2026-02-17 02:24:56.599732 | Job console starting
2026-02-17 02:24:56.610972 | Updating git repos
2026-02-17 02:24:56.693634 | Cloning repos into workspace
2026-02-17 02:24:56.880855 | Restoring repo states
2026-02-17 02:24:56.898900 | Merging changes
2026-02-17 02:24:56.898920 | Checking out repos
2026-02-17 02:24:57.156306 | Preparing playbooks
2026-02-17 02:24:57.879564 | Running Ansible setup
2026-02-17 02:25:02.236397 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-17 02:25:03.000768 |
2026-02-17 02:25:03.000976 | PLAY [Base pre]
2026-02-17 02:25:03.021029 |
2026-02-17 02:25:03.021179 | TASK [Setup log path fact]
2026-02-17 02:25:03.053349 | orchestrator | ok
2026-02-17 02:25:03.071496 |
2026-02-17 02:25:03.071635 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-17 02:25:03.112664 | orchestrator | ok
2026-02-17 02:25:03.125047 |
2026-02-17 02:25:03.125170 | TASK [emit-job-header : Print job information]
2026-02-17 02:25:03.182695 | # Job Information
2026-02-17 02:25:03.182985 | Ansible Version: 2.16.14
2026-02-17 02:25:03.183038 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-17 02:25:03.183088 | Pipeline: periodic-midnight
2026-02-17 02:25:03.183123 | Executor: 521e9411259a
2026-02-17 02:25:03.183154 | Triggered by: https://github.com/osism/testbed
2026-02-17 02:25:03.183186 | Event ID: 86521932133646daa38ed4f24d987e29
2026-02-17 02:25:03.192472 |
2026-02-17 02:25:03.192609 | LOOP [emit-job-header : Print node information]
2026-02-17 02:25:03.319821 | orchestrator | ok:
2026-02-17 02:25:03.320069 | orchestrator | # Node Information
2026-02-17 02:25:03.320114 | orchestrator | Inventory Hostname: orchestrator
2026-02-17 02:25:03.320144 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-17 02:25:03.320171 | orchestrator | Username: zuul-testbed03
2026-02-17 02:25:03.320196 | orchestrator | Distro: Debian 12.13
2026-02-17 02:25:03.320224 | orchestrator | Provider: static-testbed
2026-02-17 02:25:03.320269 | orchestrator | Region:
2026-02-17 02:25:03.320296 | orchestrator | Label: testbed-orchestrator
2026-02-17 02:25:03.320320 | orchestrator | Product Name: OpenStack Nova
2026-02-17 02:25:03.320344 | orchestrator | Interface IP: 81.163.193.140
2026-02-17 02:25:03.347335 |
2026-02-17 02:25:03.347504 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-17 02:25:03.889264 | orchestrator -> localhost | changed
2026-02-17 02:25:03.898114 |
2026-02-17 02:25:03.898294 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-17 02:25:05.010058 | orchestrator -> localhost | changed
2026-02-17 02:25:05.025899 |
2026-02-17 02:25:05.026032 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-17 02:25:05.315168 | orchestrator -> localhost | ok
2026-02-17 02:25:05.329709 |
2026-02-17 02:25:05.329884 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-17 02:25:05.363857 | orchestrator | ok
2026-02-17 02:25:05.385526 | orchestrator | included: /var/lib/zuul/builds/60dbd9ca26984ddd92da8341bdfc7b56/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-17 02:25:05.394635 |
2026-02-17 02:25:05.394746 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-17 02:25:06.667651 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-17 02:25:06.667871 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/60dbd9ca26984ddd92da8341bdfc7b56/work/60dbd9ca26984ddd92da8341bdfc7b56_id_rsa
2026-02-17 02:25:06.667912 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/60dbd9ca26984ddd92da8341bdfc7b56/work/60dbd9ca26984ddd92da8341bdfc7b56_id_rsa.pub
2026-02-17 02:25:06.667939 | orchestrator -> localhost | The key fingerprint is:
2026-02-17 02:25:06.667963 | orchestrator -> localhost | SHA256:vImW0eggeUPTHPUoBUhEWHFkvSMPWzO/ZjAsZyjJ2bo zuul-build-sshkey
2026-02-17 02:25:06.667986 | orchestrator -> localhost | The key's randomart image is:
2026-02-17 02:25:06.668020 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-17 02:25:06.668042 | orchestrator -> localhost | | **+*+o |
2026-02-17 02:25:06.668064 | orchestrator -> localhost | | . .= o.o |
2026-02-17 02:25:06.668085 | orchestrator -> localhost | | o + ... |
2026-02-17 02:25:06.668105 | orchestrator -> localhost | | o .o=* |
2026-02-17 02:25:06.668125 | orchestrator -> localhost | | o.++oOS= |
2026-02-17 02:25:06.668154 | orchestrator -> localhost | | o=+=+Bo. |
2026-02-17 02:25:06.668175 | orchestrator -> localhost | | o=+oo . |
2026-02-17 02:25:06.668195 | orchestrator -> localhost | | .. + |
2026-02-17 02:25:06.668216 | orchestrator -> localhost | | E. o |
2026-02-17 02:25:06.668254 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-17 02:25:06.668308 | orchestrator -> localhost | ok: Runtime: 0:00:00.758351
2026-02-17 02:25:06.676399 |
2026-02-17 02:25:06.676518 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-17 02:25:06.715380 | orchestrator | ok
2026-02-17 02:25:06.729144 | orchestrator | included: /var/lib/zuul/builds/60dbd9ca26984ddd92da8341bdfc7b56/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-17 02:25:06.738611 |
2026-02-17 02:25:06.738714 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-17 02:25:06.762625 | orchestrator | skipping: Conditional result was False
2026-02-17 02:25:06.770558 |
2026-02-17 02:25:06.770664 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-17 02:25:07.481415 | orchestrator | changed
2026-02-17 02:25:07.490325 |
2026-02-17 02:25:07.490450 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-17 02:25:07.813606 | orchestrator | ok
2026-02-17 02:25:07.822349 |
2026-02-17 02:25:07.822475 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-17 02:25:08.268741 | orchestrator | ok
2026-02-17 02:25:08.276895 |
2026-02-17 02:25:08.277075 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-17 02:25:08.762121 | orchestrator | ok
2026-02-17 02:25:08.769510 |
2026-02-17 02:25:08.769625 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-17 02:25:08.803820 | orchestrator | skipping: Conditional result was False
2026-02-17 02:25:08.817843 |
2026-02-17 02:25:08.817999 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-17 02:25:09.312595 | orchestrator -> localhost | changed
2026-02-17 02:25:09.327132 |
2026-02-17 02:25:09.327267 | TASK [add-build-sshkey : Add back temp key]
2026-02-17 02:25:09.684978 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/60dbd9ca26984ddd92da8341bdfc7b56/work/60dbd9ca26984ddd92da8341bdfc7b56_id_rsa (zuul-build-sshkey)
2026-02-17 02:25:09.685558 | orchestrator -> localhost | ok: Runtime: 0:00:00.020788
2026-02-17 02:25:09.701420 |
2026-02-17 02:25:09.701580 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-17 02:25:10.297094 | orchestrator | ok
2026-02-17 02:25:10.305982 |
2026-02-17 02:25:10.306116 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-17 02:25:10.341025 | orchestrator | skipping: Conditional result was False
2026-02-17 02:25:10.405445 |
2026-02-17 02:25:10.405584 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-17 02:25:10.853936 | orchestrator | ok
2026-02-17 02:25:10.871540 |
2026-02-17 02:25:10.871681 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-17 02:25:10.916664 | orchestrator | ok
2026-02-17 02:25:10.930087 |
2026-02-17 02:25:10.930311 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-17 02:25:11.220265 | orchestrator -> localhost | ok
2026-02-17 02:25:11.236256 |
2026-02-17 02:25:11.236434 | TASK [validate-host : Collect information about the host]
2026-02-17 02:25:12.475858 | orchestrator | ok
2026-02-17 02:25:12.503930 |
2026-02-17 02:25:12.504132 | TASK [validate-host : Sanitize hostname]
2026-02-17 02:25:12.571155 | orchestrator | ok
2026-02-17 02:25:12.577066 |
2026-02-17 02:25:12.577170 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-17 02:25:13.189101 | orchestrator -> localhost | changed
2026-02-17 02:25:13.204602 |
2026-02-17 02:25:13.204824 | TASK [validate-host : Collect information about zuul worker]
2026-02-17 02:25:13.693404 | orchestrator | ok
2026-02-17 02:25:13.698948 |
2026-02-17 02:25:13.699063 | TASK [validate-host : Write out all zuul information for each host]
2026-02-17 02:25:14.241722 | orchestrator -> localhost | changed
2026-02-17 02:25:14.263356 |
2026-02-17 02:25:14.263512 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-17 02:25:14.624190 | orchestrator | ok
2026-02-17 02:25:14.634101 |
2026-02-17 02:25:14.634273 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-17 02:25:32.301537 | orchestrator | changed:
2026-02-17 02:25:32.301896 | orchestrator | .d..t...... src/
2026-02-17 02:25:32.301968 | orchestrator | .d..t...... src/github.com/
2026-02-17 02:25:32.302021 | orchestrator | .d..t...... src/github.com/osism/
2026-02-17 02:25:32.302066 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-17 02:25:32.302107 | orchestrator | RedHat.yml
2026-02-17 02:25:32.322900 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-17 02:25:32.322920 | orchestrator | RedHat.yml
2026-02-17 02:25:32.322977 | orchestrator | = 1.53.0"...
2026-02-17 02:25:44.485781 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-17 02:25:44.982259 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-02-17 02:25:45.724831 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-17 02:25:45.791346 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-17 02:25:46.375760 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-17 02:25:46.968390 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-17 02:25:47.936109 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-17 02:25:47.936196 | orchestrator |
2026-02-17 02:25:47.936219 | orchestrator | Providers are signed by their developers.
2026-02-17 02:25:47.936236 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-17 02:25:47.936247 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-17 02:25:47.936269 | orchestrator |
2026-02-17 02:25:47.936279 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-17 02:25:47.936304 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-17 02:25:47.936314 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-17 02:25:47.936323 | orchestrator | you run "tofu init" in the future.
2026-02-17 02:25:47.936530 | orchestrator |
2026-02-17 02:25:47.936616 | orchestrator | OpenTofu has been successfully initialized!
2026-02-17 02:25:47.936630 | orchestrator |
2026-02-17 02:25:47.936639 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-17 02:25:47.936648 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-17 02:25:47.936657 | orchestrator | should now work.
2026-02-17 02:25:47.936667 | orchestrator |
2026-02-17 02:25:47.936676 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-17 02:25:47.936686 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-17 02:25:47.936695 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-17 02:25:48.242251 | orchestrator | Created and switched to workspace "ci"!
2026-02-17 02:25:48.242295 | orchestrator |
2026-02-17 02:25:48.242302 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-17 02:25:48.242308 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-17 02:25:48.242312 | orchestrator | for this configuration.
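The provider set that OpenTofu resolved above would correspond to a `required_providers` block roughly like the following. This is a hypothetical sketch reconstructed from the log, not the testbed's actual configuration: only the `>= 2.2.0` constraint on `hashicorp/local` is visible in the output (an earlier constraint line is truncated), so the other entries are shown without version constraints.

```hcl
terraform {
  required_providers {
    # Constraint visible in the log above ("Finding hashicorp/local
    # versions matching \">= 2.2.0\"..."); v2.7.0 was selected.
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # Installed as v3.2.4; the constraint is not visible in the log.
    null = {
      source = "hashicorp/null"
    }
    # Installed as v3.4.0; the constraint is not visible in the log.
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}
```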
2026-02-17 02:25:48.339982 | orchestrator | ci.auto.tfvars
2026-02-17 02:25:48.964246 | orchestrator | default_custom.tf
2026-02-17 02:25:51.826658 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-17 02:25:52.307263 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-17 02:25:52.535305 | orchestrator |
2026-02-17 02:25:52.535380 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-17 02:25:52.535390 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-17 02:25:52.535397 | orchestrator |   + create
2026-02-17 02:25:52.535412 | orchestrator |  <= read (data resources)
2026-02-17 02:25:52.535420 | orchestrator |
2026-02-17 02:25:52.535427 | orchestrator | OpenTofu will perform the following actions:
2026-02-17 02:25:52.535433 | orchestrator |
2026-02-17 02:25:52.535439 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-17 02:25:52.535445 | orchestrator |   # (config refers to values not yet known)
2026-02-17 02:25:52.535452 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-17 02:25:52.535458 | orchestrator |       + checksum = (known after apply)
2026-02-17 02:25:52.535464 | orchestrator |       + created_at = (known after apply)
2026-02-17 02:25:52.535470 | orchestrator |       + file = (known after apply)
2026-02-17 02:25:52.535476 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.535498 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.535504 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-17 02:25:52.535510 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-17 02:25:52.535516 | orchestrator |       + most_recent = true
2026-02-17 02:25:52.535522 | orchestrator |       + name = (known after apply)
2026-02-17 02:25:52.535528 | orchestrator |       + protected = (known after apply)
2026-02-17 02:25:52.535534 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.535550 | orchestrator |       + schema = (known after apply)
2026-02-17 02:25:52.535556 | orchestrator |       + size_bytes = (known after apply)
2026-02-17 02:25:52.535562 | orchestrator |       + tags = (known after apply)
2026-02-17 02:25:52.535596 | orchestrator |       + updated_at = (known after apply)
2026-02-17 02:25:52.535602 | orchestrator |     }
2026-02-17 02:25:52.535612 | orchestrator |
2026-02-17 02:25:52.535618 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-17 02:25:52.535624 | orchestrator |   # (config refers to values not yet known)
2026-02-17 02:25:52.535629 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-17 02:25:52.535635 | orchestrator |       + checksum = (known after apply)
2026-02-17 02:25:52.535641 | orchestrator |       + created_at = (known after apply)
2026-02-17 02:25:52.535647 | orchestrator |       + file = (known after apply)
2026-02-17 02:25:52.535653 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.535659 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.535664 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-17 02:25:52.535671 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-17 02:25:52.535677 | orchestrator |       + most_recent = true
2026-02-17 02:25:52.535682 | orchestrator |       + name = (known after apply)
2026-02-17 02:25:52.535688 | orchestrator |       + protected = (known after apply)
2026-02-17 02:25:52.535694 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.535700 | orchestrator |       + schema = (known after apply)
2026-02-17 02:25:52.535706 | orchestrator |       + size_bytes = (known after apply)
2026-02-17 02:25:52.535711 | orchestrator |       + tags = (known after apply)
2026-02-17 02:25:52.535717 | orchestrator |       + updated_at = (known after apply)
2026-02-17 02:25:52.535723 | orchestrator |     }
2026-02-17 02:25:52.535731 | orchestrator |
2026-02-17 02:25:52.535737 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-17 02:25:52.535743 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-17 02:25:52.535749 | orchestrator |       + content = (known after apply)
2026-02-17 02:25:52.535763 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-17 02:25:52.535770 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-17 02:25:52.535775 | orchestrator |       + content_md5 = (known after apply)
2026-02-17 02:25:52.535781 | orchestrator |       + content_sha1 = (known after apply)
2026-02-17 02:25:52.535787 | orchestrator |       + content_sha256 = (known after apply)
2026-02-17 02:25:52.535793 | orchestrator |       + content_sha512 = (known after apply)
2026-02-17 02:25:52.535798 | orchestrator |       + directory_permission = "0777"
2026-02-17 02:25:52.535804 | orchestrator |       + file_permission = "0644"
2026-02-17 02:25:52.535811 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-17 02:25:52.535817 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.535822 | orchestrator |     }
2026-02-17 02:25:52.535829 | orchestrator |
2026-02-17 02:25:52.535835 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-17 02:25:52.535841 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-17 02:25:52.535846 | orchestrator |       + content = (known after apply)
2026-02-17 02:25:52.535852 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-17 02:25:52.535858 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-17 02:25:52.535864 | orchestrator |       + content_md5 = (known after apply)
2026-02-17 02:25:52.535870 | orchestrator |       + content_sha1 = (known after apply)
2026-02-17 02:25:52.535875 | orchestrator |       + content_sha256 = (known after apply)
2026-02-17 02:25:52.535890 | orchestrator |       + content_sha512 = (known after apply)
2026-02-17 02:25:52.535896 | orchestrator |       + directory_permission = "0777"
2026-02-17 02:25:52.535902 | orchestrator |       + file_permission = "0644"
2026-02-17 02:25:52.535915 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-17 02:25:52.535921 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.535927 | orchestrator |     }
2026-02-17 02:25:52.535937 | orchestrator |
2026-02-17 02:25:52.535944 | orchestrator |   # local_file.inventory will be created
2026-02-17 02:25:52.535950 | orchestrator |   + resource "local_file" "inventory" {
2026-02-17 02:25:52.535957 | orchestrator |       + content = (known after apply)
2026-02-17 02:25:52.535963 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-17 02:25:52.535969 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-17 02:25:52.535975 | orchestrator |       + content_md5 = (known after apply)
2026-02-17 02:25:52.535982 | orchestrator |       + content_sha1 = (known after apply)
2026-02-17 02:25:52.535989 | orchestrator |       + content_sha256 = (known after apply)
2026-02-17 02:25:52.535995 | orchestrator |       + content_sha512 = (known after apply)
2026-02-17 02:25:52.536001 | orchestrator |       + directory_permission = "0777"
2026-02-17 02:25:52.536007 | orchestrator |       + file_permission = "0644"
2026-02-17 02:25:52.536014 | orchestrator |       + filename = "inventory.ci"
2026-02-17 02:25:52.536020 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536026 | orchestrator |     }
2026-02-17 02:25:52.536033 | orchestrator |
2026-02-17 02:25:52.536039 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-17 02:25:52.536045 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-17 02:25:52.536051 | orchestrator |       + content = (sensitive value)
2026-02-17 02:25:52.536057 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-17 02:25:52.536063 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-17 02:25:52.536070 | orchestrator |       + content_md5 = (known after apply)
2026-02-17 02:25:52.536076 | orchestrator |       + content_sha1 = (known after apply)
2026-02-17 02:25:52.536082 | orchestrator |       + content_sha256 = (known after apply)
2026-02-17 02:25:52.536088 | orchestrator |       + content_sha512 = (known after apply)
2026-02-17 02:25:52.536095 | orchestrator |       + directory_permission = "0700"
2026-02-17 02:25:52.536101 | orchestrator |       + file_permission = "0600"
2026-02-17 02:25:52.536107 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-17 02:25:52.536114 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536120 | orchestrator |     }
2026-02-17 02:25:52.536126 | orchestrator |
2026-02-17 02:25:52.536132 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-17 02:25:52.536139 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-17 02:25:52.536145 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536151 | orchestrator |     }
2026-02-17 02:25:52.536160 | orchestrator |
2026-02-17 02:25:52.536166 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-17 02:25:52.536173 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-17 02:25:52.536179 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536185 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536191 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536198 | orchestrator |       + image_id = (known after apply)
2026-02-17 02:25:52.536204 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536210 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-17 02:25:52.536216 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536223 | orchestrator |       + size = 80
2026-02-17 02:25:52.536229 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536235 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536242 | orchestrator |     }
2026-02-17 02:25:52.536248 | orchestrator |
2026-02-17 02:25:52.536255 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-17 02:25:52.536261 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-17 02:25:52.536267 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536274 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536280 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536292 | orchestrator |       + image_id = (known after apply)
2026-02-17 02:25:52.536298 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536304 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-17 02:25:52.536310 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536316 | orchestrator |       + size = 80
2026-02-17 02:25:52.536321 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536328 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536334 | orchestrator |     }
2026-02-17 02:25:52.536340 | orchestrator |
2026-02-17 02:25:52.536346 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-17 02:25:52.536352 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-17 02:25:52.536359 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536365 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536371 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536377 | orchestrator |       + image_id = (known after apply)
2026-02-17 02:25:52.536383 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536389 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-17 02:25:52.536395 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536402 | orchestrator |       + size = 80
2026-02-17 02:25:52.536408 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536414 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536421 | orchestrator |     }
2026-02-17 02:25:52.536429 | orchestrator |
2026-02-17 02:25:52.536436 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-17 02:25:52.536442 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-17 02:25:52.536447 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536453 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536459 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536465 | orchestrator |       + image_id = (known after apply)
2026-02-17 02:25:52.536472 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536478 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-17 02:25:52.536484 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536490 | orchestrator |       + size = 80
2026-02-17 02:25:52.536501 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536507 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536513 | orchestrator |     }
2026-02-17 02:25:52.536520 | orchestrator |
2026-02-17 02:25:52.536526 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-17 02:25:52.536533 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-17 02:25:52.536539 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536545 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536551 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536557 | orchestrator |       + image_id = (known after apply)
2026-02-17 02:25:52.536596 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536604 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-17 02:25:52.536610 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536616 | orchestrator |       + size = 80
2026-02-17 02:25:52.536623 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536629 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536635 | orchestrator |     }
2026-02-17 02:25:52.536641 | orchestrator |
2026-02-17 02:25:52.536647 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-17 02:25:52.536653 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-17 02:25:52.536660 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536666 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536672 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536683 | orchestrator |       + image_id = (known after apply)
2026-02-17 02:25:52.536689 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536696 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-17 02:25:52.536702 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536708 | orchestrator |       + size = 80
2026-02-17 02:25:52.536715 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536721 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536727 | orchestrator |     }
2026-02-17 02:25:52.536733 | orchestrator |
2026-02-17 02:25:52.536739 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-17 02:25:52.536745 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-17 02:25:52.536751 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536758 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536764 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536770 | orchestrator |       + image_id = (known after apply)
2026-02-17 02:25:52.536777 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536783 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-17 02:25:52.536789 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536795 | orchestrator |       + size = 80
2026-02-17 02:25:52.536801 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536807 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536814 | orchestrator |     }
2026-02-17 02:25:52.536823 | orchestrator |
2026-02-17 02:25:52.536829 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-17 02:25:52.536836 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-17 02:25:52.536843 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536849 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536856 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536862 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536868 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-17 02:25:52.536875 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536881 | orchestrator |       + size = 20
2026-02-17 02:25:52.536887 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536894 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536900 | orchestrator |     }
2026-02-17 02:25:52.536907 | orchestrator |
2026-02-17 02:25:52.536913 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-17 02:25:52.536920 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-17 02:25:52.536927 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.536933 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.536939 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.536945 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.536951 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-17 02:25:52.536957 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.536964 | orchestrator |       + size = 20
2026-02-17 02:25:52.536970 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.536976 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.536982 | orchestrator |     }
2026-02-17 02:25:52.536989 | orchestrator |
2026-02-17 02:25:52.536995 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-17 02:25:52.537001 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-17 02:25:52.537007 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.537014 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.537020 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.537026 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.537032 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-17 02:25:52.537039 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.537049 | orchestrator |       + size = 20
2026-02-17 02:25:52.537055 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.537061 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.537068 | orchestrator |     }
2026-02-17 02:25:52.537074 | orchestrator |
2026-02-17 02:25:52.537081 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-17 02:25:52.537087 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-17 02:25:52.537093 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.537100 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.537106 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.537116 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.537123 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-17 02:25:52.537129 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.537136 | orchestrator |       + size = 20
2026-02-17 02:25:52.537142 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.537149 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.537155 | orchestrator |     }
2026-02-17 02:25:52.537162 | orchestrator |
2026-02-17 02:25:52.537168 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-17 02:25:52.537175 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-17 02:25:52.537181 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.537188 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.537194 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.537201 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.537208 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-17 02:25:52.537214 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.537221 | orchestrator |       + size = 20
2026-02-17 02:25:52.537227 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.537234 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.537240 | orchestrator |     }
2026-02-17 02:25:52.537249 | orchestrator |
2026-02-17 02:25:52.537256 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-17 02:25:52.537262 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-17 02:25:52.537269 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.537275 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.537282 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.537288 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.537295 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-17 02:25:52.537301 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.537308 | orchestrator |       + size = 20
2026-02-17 02:25:52.537314 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.537321 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.537328 | orchestrator |     }
2026-02-17 02:25:52.537334 | orchestrator |
2026-02-17 02:25:52.537340 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-17 02:25:52.537346 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-17 02:25:52.537351 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.537357 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.537363 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.537369 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.537375 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-17 02:25:52.537381 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.537388 | orchestrator |       + size = 20
2026-02-17 02:25:52.537394 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.537400 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.537406 | orchestrator |     }
2026-02-17 02:25:52.537413 | orchestrator |
2026-02-17 02:25:52.537419 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-17 02:25:52.537426 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-17 02:25:52.537437 | orchestrator |       + attachment = (known after apply)
2026-02-17 02:25:52.537444 | orchestrator |       + availability_zone = "nova"
2026-02-17 02:25:52.537450 | orchestrator |       + id = (known after apply)
2026-02-17 02:25:52.537457 | orchestrator |       + metadata = (known after apply)
2026-02-17 02:25:52.537463 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-17 02:25:52.537470 | orchestrator |       + region = (known after apply)
2026-02-17 02:25:52.537476 | orchestrator |       + size = 20
2026-02-17 02:25:52.537482 | orchestrator |       + volume_retype_policy = "never"
2026-02-17 02:25:52.537489 | orchestrator |       + volume_type = "ssd"
2026-02-17 02:25:52.537495 | orchestrator |     }
2026-02-17 02:25:52.537501 | orchestrator |
2026-02-17 02:25:52.537507 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-17 02:25:52.537513 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-17 02:25:52.537519 | orchestrator | + attachment = (known after apply) 2026-02-17 02:25:52.537526 | orchestrator | + availability_zone = "nova" 2026-02-17 02:25:52.537532 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.537538 | orchestrator | + metadata = (known after apply) 2026-02-17 02:25:52.537544 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-17 02:25:52.537551 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.537557 | orchestrator | + size = 20 2026-02-17 02:25:52.537574 | orchestrator | + volume_retype_policy = "never" 2026-02-17 02:25:52.537582 | orchestrator | + volume_type = "ssd" 2026-02-17 02:25:52.537588 | orchestrator | } 2026-02-17 02:25:52.537597 | orchestrator | 2026-02-17 02:25:52.537604 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-17 02:25:52.537610 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-17 02:25:52.537616 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-17 02:25:52.537623 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-17 02:25:52.537629 | orchestrator | + all_metadata = (known after apply) 2026-02-17 02:25:52.537635 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.537641 | orchestrator | + availability_zone = "nova" 2026-02-17 02:25:52.537648 | orchestrator | + config_drive = true 2026-02-17 02:25:52.537658 | orchestrator | + created = (known after apply) 2026-02-17 02:25:52.537664 | orchestrator | + flavor_id = (known after apply) 2026-02-17 02:25:52.537670 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-17 02:25:52.537676 | orchestrator | + force_delete = false 2026-02-17 02:25:52.537682 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-17 02:25:52.537688 | 
orchestrator | + id = (known after apply) 2026-02-17 02:25:52.537695 | orchestrator | + image_id = (known after apply) 2026-02-17 02:25:52.537701 | orchestrator | + image_name = (known after apply) 2026-02-17 02:25:52.537707 | orchestrator | + key_pair = "testbed" 2026-02-17 02:25:52.537713 | orchestrator | + name = "testbed-manager" 2026-02-17 02:25:52.537720 | orchestrator | + power_state = "active" 2026-02-17 02:25:52.537726 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.537732 | orchestrator | + security_groups = (known after apply) 2026-02-17 02:25:52.537738 | orchestrator | + stop_before_destroy = false 2026-02-17 02:25:52.537744 | orchestrator | + updated = (known after apply) 2026-02-17 02:25:52.537750 | orchestrator | + user_data = (sensitive value) 2026-02-17 02:25:52.537756 | orchestrator | 2026-02-17 02:25:52.537763 | orchestrator | + block_device { 2026-02-17 02:25:52.537770 | orchestrator | + boot_index = 0 2026-02-17 02:25:52.537776 | orchestrator | + delete_on_termination = false 2026-02-17 02:25:52.537782 | orchestrator | + destination_type = "volume" 2026-02-17 02:25:52.537788 | orchestrator | + multiattach = false 2026-02-17 02:25:52.537795 | orchestrator | + source_type = "volume" 2026-02-17 02:25:52.537801 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.537812 | orchestrator | } 2026-02-17 02:25:52.537818 | orchestrator | 2026-02-17 02:25:52.537825 | orchestrator | + network { 2026-02-17 02:25:52.537831 | orchestrator | + access_network = false 2026-02-17 02:25:52.537838 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-17 02:25:52.537844 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-17 02:25:52.537850 | orchestrator | + mac = (known after apply) 2026-02-17 02:25:52.537856 | orchestrator | + name = (known after apply) 2026-02-17 02:25:52.537862 | orchestrator | + port = (known after apply) 2026-02-17 02:25:52.537869 | orchestrator | + uuid = (known after apply) 2026-02-17 
02:25:52.537875 | orchestrator | } 2026-02-17 02:25:52.537882 | orchestrator | } 2026-02-17 02:25:52.537888 | orchestrator | 2026-02-17 02:25:52.537894 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-17 02:25:52.537901 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-17 02:25:52.537907 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-17 02:25:52.537913 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-17 02:25:52.537919 | orchestrator | + all_metadata = (known after apply) 2026-02-17 02:25:52.537926 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.537932 | orchestrator | + availability_zone = "nova" 2026-02-17 02:25:52.537938 | orchestrator | + config_drive = true 2026-02-17 02:25:52.537944 | orchestrator | + created = (known after apply) 2026-02-17 02:25:52.537950 | orchestrator | + flavor_id = (known after apply) 2026-02-17 02:25:52.537956 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-17 02:25:52.537961 | orchestrator | + force_delete = false 2026-02-17 02:25:52.537967 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-17 02:25:52.537973 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.537979 | orchestrator | + image_id = (known after apply) 2026-02-17 02:25:52.537985 | orchestrator | + image_name = (known after apply) 2026-02-17 02:25:52.537991 | orchestrator | + key_pair = "testbed" 2026-02-17 02:25:52.537998 | orchestrator | + name = "testbed-node-0" 2026-02-17 02:25:52.538004 | orchestrator | + power_state = "active" 2026-02-17 02:25:52.538010 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.538054 | orchestrator | + security_groups = (known after apply) 2026-02-17 02:25:52.538061 | orchestrator | + stop_before_destroy = false 2026-02-17 02:25:52.538067 | orchestrator | + updated = (known after apply) 2026-02-17 02:25:52.538072 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-17 02:25:52.538079 | orchestrator | 2026-02-17 02:25:52.538086 | orchestrator | + block_device { 2026-02-17 02:25:52.538093 | orchestrator | + boot_index = 0 2026-02-17 02:25:52.538099 | orchestrator | + delete_on_termination = false 2026-02-17 02:25:52.538105 | orchestrator | + destination_type = "volume" 2026-02-17 02:25:52.538111 | orchestrator | + multiattach = false 2026-02-17 02:25:52.538118 | orchestrator | + source_type = "volume" 2026-02-17 02:25:52.538124 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.538131 | orchestrator | } 2026-02-17 02:25:52.538137 | orchestrator | 2026-02-17 02:25:52.538143 | orchestrator | + network { 2026-02-17 02:25:52.538150 | orchestrator | + access_network = false 2026-02-17 02:25:52.538156 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-17 02:25:52.538163 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-17 02:25:52.538169 | orchestrator | + mac = (known after apply) 2026-02-17 02:25:52.538176 | orchestrator | + name = (known after apply) 2026-02-17 02:25:52.538182 | orchestrator | + port = (known after apply) 2026-02-17 02:25:52.538189 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.538195 | orchestrator | } 2026-02-17 02:25:52.538201 | orchestrator | } 2026-02-17 02:25:52.538212 | orchestrator | 2026-02-17 02:25:52.538219 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-17 02:25:52.538225 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-17 02:25:52.538231 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-17 02:25:52.538242 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-17 02:25:52.538248 | orchestrator | + all_metadata = (known after apply) 2026-02-17 02:25:52.538254 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.538260 | orchestrator | + availability_zone = "nova" 2026-02-17 02:25:52.538266 
| orchestrator | + config_drive = true 2026-02-17 02:25:52.538272 | orchestrator | + created = (known after apply) 2026-02-17 02:25:52.538278 | orchestrator | + flavor_id = (known after apply) 2026-02-17 02:25:52.538283 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-17 02:25:52.538290 | orchestrator | + force_delete = false 2026-02-17 02:25:52.538295 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-17 02:25:52.538301 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.538307 | orchestrator | + image_id = (known after apply) 2026-02-17 02:25:52.538314 | orchestrator | + image_name = (known after apply) 2026-02-17 02:25:52.538320 | orchestrator | + key_pair = "testbed" 2026-02-17 02:25:52.538326 | orchestrator | + name = "testbed-node-1" 2026-02-17 02:25:52.538332 | orchestrator | + power_state = "active" 2026-02-17 02:25:52.538338 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.538345 | orchestrator | + security_groups = (known after apply) 2026-02-17 02:25:52.538350 | orchestrator | + stop_before_destroy = false 2026-02-17 02:25:52.538356 | orchestrator | + updated = (known after apply) 2026-02-17 02:25:52.538366 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-17 02:25:52.538372 | orchestrator | 2026-02-17 02:25:52.538378 | orchestrator | + block_device { 2026-02-17 02:25:52.538384 | orchestrator | + boot_index = 0 2026-02-17 02:25:52.538390 | orchestrator | + delete_on_termination = false 2026-02-17 02:25:52.538396 | orchestrator | + destination_type = "volume" 2026-02-17 02:25:52.538402 | orchestrator | + multiattach = false 2026-02-17 02:25:52.538408 | orchestrator | + source_type = "volume" 2026-02-17 02:25:52.538413 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.538419 | orchestrator | } 2026-02-17 02:25:52.538425 | orchestrator | 2026-02-17 02:25:52.538431 | orchestrator | + network { 2026-02-17 02:25:52.538437 | orchestrator | + access_network = 
false 2026-02-17 02:25:52.538443 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-17 02:25:52.538449 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-17 02:25:52.538455 | orchestrator | + mac = (known after apply) 2026-02-17 02:25:52.538461 | orchestrator | + name = (known after apply) 2026-02-17 02:25:52.538467 | orchestrator | + port = (known after apply) 2026-02-17 02:25:52.538473 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.538479 | orchestrator | } 2026-02-17 02:25:52.538486 | orchestrator | } 2026-02-17 02:25:52.538492 | orchestrator | 2026-02-17 02:25:52.538498 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-17 02:25:52.538504 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-17 02:25:52.538510 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-17 02:25:52.538516 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-17 02:25:52.538523 | orchestrator | + all_metadata = (known after apply) 2026-02-17 02:25:52.538529 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.538535 | orchestrator | + availability_zone = "nova" 2026-02-17 02:25:52.538541 | orchestrator | + config_drive = true 2026-02-17 02:25:52.538547 | orchestrator | + created = (known after apply) 2026-02-17 02:25:52.538553 | orchestrator | + flavor_id = (known after apply) 2026-02-17 02:25:52.538559 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-17 02:25:52.538596 | orchestrator | + force_delete = false 2026-02-17 02:25:52.538602 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-17 02:25:52.538608 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.538615 | orchestrator | + image_id = (known after apply) 2026-02-17 02:25:52.538631 | orchestrator | + image_name = (known after apply) 2026-02-17 02:25:52.538637 | orchestrator | + key_pair = "testbed" 2026-02-17 02:25:52.538643 | orchestrator | + name = 
"testbed-node-2" 2026-02-17 02:25:52.538650 | orchestrator | + power_state = "active" 2026-02-17 02:25:52.538656 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.538661 | orchestrator | + security_groups = (known after apply) 2026-02-17 02:25:52.538668 | orchestrator | + stop_before_destroy = false 2026-02-17 02:25:52.538674 | orchestrator | + updated = (known after apply) 2026-02-17 02:25:52.538680 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-17 02:25:52.538686 | orchestrator | 2026-02-17 02:25:52.538692 | orchestrator | + block_device { 2026-02-17 02:25:52.538698 | orchestrator | + boot_index = 0 2026-02-17 02:25:52.538705 | orchestrator | + delete_on_termination = false 2026-02-17 02:25:52.538712 | orchestrator | + destination_type = "volume" 2026-02-17 02:25:52.538717 | orchestrator | + multiattach = false 2026-02-17 02:25:52.538724 | orchestrator | + source_type = "volume" 2026-02-17 02:25:52.538730 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.538736 | orchestrator | } 2026-02-17 02:25:52.538742 | orchestrator | 2026-02-17 02:25:52.538748 | orchestrator | + network { 2026-02-17 02:25:52.538755 | orchestrator | + access_network = false 2026-02-17 02:25:52.538761 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-17 02:25:52.538767 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-17 02:25:52.538773 | orchestrator | + mac = (known after apply) 2026-02-17 02:25:52.538779 | orchestrator | + name = (known after apply) 2026-02-17 02:25:52.538785 | orchestrator | + port = (known after apply) 2026-02-17 02:25:52.538791 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.538797 | orchestrator | } 2026-02-17 02:25:52.538803 | orchestrator | } 2026-02-17 02:25:52.538812 | orchestrator | 2026-02-17 02:25:52.538821 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-17 02:25:52.538827 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-17 02:25:52.538833 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-17 02:25:52.538839 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-17 02:25:52.538845 | orchestrator | + all_metadata = (known after apply) 2026-02-17 02:25:52.538851 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.538858 | orchestrator | + availability_zone = "nova" 2026-02-17 02:25:52.538864 | orchestrator | + config_drive = true 2026-02-17 02:25:52.538869 | orchestrator | + created = (known after apply) 2026-02-17 02:25:52.538875 | orchestrator | + flavor_id = (known after apply) 2026-02-17 02:25:52.538881 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-17 02:25:52.538887 | orchestrator | + force_delete = false 2026-02-17 02:25:52.538893 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-17 02:25:52.538899 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.538905 | orchestrator | + image_id = (known after apply) 2026-02-17 02:25:52.538912 | orchestrator | + image_name = (known after apply) 2026-02-17 02:25:52.538918 | orchestrator | + key_pair = "testbed" 2026-02-17 02:25:52.538924 | orchestrator | + name = "testbed-node-3" 2026-02-17 02:25:52.538930 | orchestrator | + power_state = "active" 2026-02-17 02:25:52.538937 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.538943 | orchestrator | + security_groups = (known after apply) 2026-02-17 02:25:52.538949 | orchestrator | + stop_before_destroy = false 2026-02-17 02:25:52.538955 | orchestrator | + updated = (known after apply) 2026-02-17 02:25:52.538961 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-17 02:25:52.538967 | orchestrator | 2026-02-17 02:25:52.538972 | orchestrator | + block_device { 2026-02-17 02:25:52.538978 | orchestrator | + boot_index = 0 2026-02-17 02:25:52.538984 | orchestrator | + delete_on_termination = false 2026-02-17 
02:25:52.538990 | orchestrator | + destination_type = "volume" 2026-02-17 02:25:52.539000 | orchestrator | + multiattach = false 2026-02-17 02:25:52.539006 | orchestrator | + source_type = "volume" 2026-02-17 02:25:52.539012 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.539018 | orchestrator | } 2026-02-17 02:25:52.539023 | orchestrator | 2026-02-17 02:25:52.539029 | orchestrator | + network { 2026-02-17 02:25:52.539036 | orchestrator | + access_network = false 2026-02-17 02:25:52.539042 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-17 02:25:52.539048 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-17 02:25:52.539054 | orchestrator | + mac = (known after apply) 2026-02-17 02:25:52.539059 | orchestrator | + name = (known after apply) 2026-02-17 02:25:52.539065 | orchestrator | + port = (known after apply) 2026-02-17 02:25:52.539071 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.539077 | orchestrator | } 2026-02-17 02:25:52.539083 | orchestrator | } 2026-02-17 02:25:52.539089 | orchestrator | 2026-02-17 02:25:52.539095 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-17 02:25:52.539100 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-17 02:25:52.539106 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-17 02:25:52.539112 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-17 02:25:52.539118 | orchestrator | + all_metadata = (known after apply) 2026-02-17 02:25:52.539123 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.539129 | orchestrator | + availability_zone = "nova" 2026-02-17 02:25:52.539135 | orchestrator | + config_drive = true 2026-02-17 02:25:52.539141 | orchestrator | + created = (known after apply) 2026-02-17 02:25:52.539147 | orchestrator | + flavor_id = (known after apply) 2026-02-17 02:25:52.539153 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-17 02:25:52.539159 | 
orchestrator | + force_delete = false 2026-02-17 02:25:52.539164 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-17 02:25:52.539170 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.539175 | orchestrator | + image_id = (known after apply) 2026-02-17 02:25:52.539181 | orchestrator | + image_name = (known after apply) 2026-02-17 02:25:52.539187 | orchestrator | + key_pair = "testbed" 2026-02-17 02:25:52.539193 | orchestrator | + name = "testbed-node-4" 2026-02-17 02:25:52.539199 | orchestrator | + power_state = "active" 2026-02-17 02:25:52.539205 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.539212 | orchestrator | + security_groups = (known after apply) 2026-02-17 02:25:52.539218 | orchestrator | + stop_before_destroy = false 2026-02-17 02:25:52.539224 | orchestrator | + updated = (known after apply) 2026-02-17 02:25:52.539230 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-17 02:25:52.539237 | orchestrator | 2026-02-17 02:25:52.539243 | orchestrator | + block_device { 2026-02-17 02:25:52.539249 | orchestrator | + boot_index = 0 2026-02-17 02:25:52.539255 | orchestrator | + delete_on_termination = false 2026-02-17 02:25:52.539261 | orchestrator | + destination_type = "volume" 2026-02-17 02:25:52.539266 | orchestrator | + multiattach = false 2026-02-17 02:25:52.539273 | orchestrator | + source_type = "volume" 2026-02-17 02:25:52.539278 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.539284 | orchestrator | } 2026-02-17 02:25:52.539290 | orchestrator | 2026-02-17 02:25:52.539296 | orchestrator | + network { 2026-02-17 02:25:52.539302 | orchestrator | + access_network = false 2026-02-17 02:25:52.539308 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-17 02:25:52.539314 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-17 02:25:52.539320 | orchestrator | + mac = (known after apply) 2026-02-17 02:25:52.539326 | orchestrator | + name = (known 
after apply) 2026-02-17 02:25:52.539332 | orchestrator | + port = (known after apply) 2026-02-17 02:25:52.539338 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.539344 | orchestrator | } 2026-02-17 02:25:52.539350 | orchestrator | } 2026-02-17 02:25:52.539364 | orchestrator | 2026-02-17 02:25:52.539370 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-17 02:25:52.539376 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-17 02:25:52.539383 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-17 02:25:52.539389 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-17 02:25:52.539395 | orchestrator | + all_metadata = (known after apply) 2026-02-17 02:25:52.539401 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.539408 | orchestrator | + availability_zone = "nova" 2026-02-17 02:25:52.539414 | orchestrator | + config_drive = true 2026-02-17 02:25:52.539420 | orchestrator | + created = (known after apply) 2026-02-17 02:25:52.539426 | orchestrator | + flavor_id = (known after apply) 2026-02-17 02:25:52.539432 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-17 02:25:52.539438 | orchestrator | + force_delete = false 2026-02-17 02:25:52.539444 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-17 02:25:52.539450 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.539456 | orchestrator | + image_id = (known after apply) 2026-02-17 02:25:52.539462 | orchestrator | + image_name = (known after apply) 2026-02-17 02:25:52.539468 | orchestrator | + key_pair = "testbed" 2026-02-17 02:25:52.539474 | orchestrator | + name = "testbed-node-5" 2026-02-17 02:25:52.539480 | orchestrator | + power_state = "active" 2026-02-17 02:25:52.539486 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.539492 | orchestrator | + security_groups = (known after apply) 2026-02-17 02:25:52.539498 | orchestrator | + 
stop_before_destroy = false 2026-02-17 02:25:52.539504 | orchestrator | + updated = (known after apply) 2026-02-17 02:25:52.539509 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-17 02:25:52.539515 | orchestrator | 2026-02-17 02:25:52.539521 | orchestrator | + block_device { 2026-02-17 02:25:52.539527 | orchestrator | + boot_index = 0 2026-02-17 02:25:52.539533 | orchestrator | + delete_on_termination = false 2026-02-17 02:25:52.539539 | orchestrator | + destination_type = "volume" 2026-02-17 02:25:52.539545 | orchestrator | + multiattach = false 2026-02-17 02:25:52.539551 | orchestrator | + source_type = "volume" 2026-02-17 02:25:52.539557 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.539592 | orchestrator | } 2026-02-17 02:25:52.539600 | orchestrator | 2026-02-17 02:25:52.539606 | orchestrator | + network { 2026-02-17 02:25:52.539611 | orchestrator | + access_network = false 2026-02-17 02:25:52.539617 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-17 02:25:52.539623 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-17 02:25:52.539629 | orchestrator | + mac = (known after apply) 2026-02-17 02:25:52.539635 | orchestrator | + name = (known after apply) 2026-02-17 02:25:52.539641 | orchestrator | + port = (known after apply) 2026-02-17 02:25:52.539648 | orchestrator | + uuid = (known after apply) 2026-02-17 02:25:52.539654 | orchestrator | } 2026-02-17 02:25:52.539659 | orchestrator | } 2026-02-17 02:25:52.539665 | orchestrator | 2026-02-17 02:25:52.539671 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-17 02:25:52.539678 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-17 02:25:52.539685 | orchestrator | + fingerprint = (known after apply) 2026-02-17 02:25:52.539691 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.539696 | orchestrator | + name = "testbed" 2026-02-17 02:25:52.539702 | orchestrator | + private_key = 
(sensitive value) 2026-02-17 02:25:52.539708 | orchestrator | + public_key = (known after apply) 2026-02-17 02:25:52.539713 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.539719 | orchestrator | + user_id = (known after apply) 2026-02-17 02:25:52.539725 | orchestrator | } 2026-02-17 02:25:52.539731 | orchestrator | 2026-02-17 02:25:52.539737 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-17 02:25:52.539743 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-17 02:25:52.539753 | orchestrator | + device = (known after apply) 2026-02-17 02:25:52.539759 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.539764 | orchestrator | + instance_id = (known after apply) 2026-02-17 02:25:52.539770 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.539780 | orchestrator | + volume_id = (known after apply) 2026-02-17 02:25:52.539787 | orchestrator | } 2026-02-17 02:25:52.539794 | orchestrator | 2026-02-17 02:25:52.539800 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-17 02:25:52.539806 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-17 02:25:52.539812 | orchestrator | + device = (known after apply) 2026-02-17 02:25:52.539818 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.539823 | orchestrator | + instance_id = (known after apply) 2026-02-17 02:25:52.539829 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.539834 | orchestrator | + volume_id = (known after apply) 2026-02-17 02:25:52.539840 | orchestrator | } 2026-02-17 02:25:52.539846 | orchestrator | 2026-02-17 02:25:52.539852 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-17 02:25:52.539858 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
02:25:52.548533 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-02-17 02:25:52.548539 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-02-17 02:25:52.548545 | orchestrator | + description = "vrrp" 2026-02-17 02:25:52.548551 | orchestrator | + direction = "ingress" 2026-02-17 02:25:52.548557 | orchestrator | + ethertype = "IPv4" 2026-02-17 02:25:52.548608 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.548618 | orchestrator | + protocol = "112" 2026-02-17 02:25:52.548625 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.548631 | orchestrator | + remote_address_group_id = (known after apply) 2026-02-17 02:25:52.548638 | orchestrator | + remote_group_id = (known after apply) 2026-02-17 02:25:52.548643 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-02-17 02:25:52.548649 | orchestrator | + security_group_id = (known after apply) 2026-02-17 02:25:52.548655 | orchestrator | + tenant_id = (known after apply) 2026-02-17 02:25:52.548661 | orchestrator | } 2026-02-17 02:25:52.548670 | orchestrator | 2026-02-17 02:25:52.548676 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-02-17 02:25:52.548682 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-02-17 02:25:52.548688 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.548695 | orchestrator | + description = "management security group" 2026-02-17 02:25:52.548701 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.548707 | orchestrator | + name = "testbed-management" 2026-02-17 02:25:52.548713 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.548719 | orchestrator | + stateful = (known after apply) 2026-02-17 02:25:52.548725 | orchestrator | + tenant_id = (known after apply) 2026-02-17 02:25:52.548731 | orchestrator | } 2026-02-17 
02:25:52.548738 | orchestrator | 2026-02-17 02:25:52.548744 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-02-17 02:25:52.548750 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-02-17 02:25:52.548756 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.548762 | orchestrator | + description = "node security group" 2026-02-17 02:25:52.548768 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.548773 | orchestrator | + name = "testbed-node" 2026-02-17 02:25:52.548779 | orchestrator | + region = (known after apply) 2026-02-17 02:25:52.548785 | orchestrator | + stateful = (known after apply) 2026-02-17 02:25:52.548791 | orchestrator | + tenant_id = (known after apply) 2026-02-17 02:25:52.548797 | orchestrator | } 2026-02-17 02:25:52.548803 | orchestrator | 2026-02-17 02:25:52.548809 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-02-17 02:25:52.548814 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-02-17 02:25:52.548820 | orchestrator | + all_tags = (known after apply) 2026-02-17 02:25:52.548827 | orchestrator | + cidr = "192.168.16.0/20" 2026-02-17 02:25:52.548833 | orchestrator | + dns_nameservers = [ 2026-02-17 02:25:52.548840 | orchestrator | + "8.8.8.8", 2026-02-17 02:25:52.548845 | orchestrator | + "9.9.9.9", 2026-02-17 02:25:52.548851 | orchestrator | ] 2026-02-17 02:25:52.548857 | orchestrator | + enable_dhcp = true 2026-02-17 02:25:52.548864 | orchestrator | + gateway_ip = (known after apply) 2026-02-17 02:25:52.548874 | orchestrator | + id = (known after apply) 2026-02-17 02:25:52.548881 | orchestrator | + ip_version = 4 2026-02-17 02:25:52.548887 | orchestrator | + ipv6_address_mode = (known after apply) 2026-02-17 02:25:52.548893 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-02-17 02:25:52.548900 | orchestrator | + name = "subnet-testbed-management" 
2026-02-17 02:25:52.548906 | orchestrator | + network_id = (known after apply)
2026-02-17 02:25:52.548912 | orchestrator | + no_gateway = false
2026-02-17 02:25:52.548918 | orchestrator | + region = (known after apply)
2026-02-17 02:25:52.548924 | orchestrator | + service_types = (known after apply)
2026-02-17 02:25:52.548935 | orchestrator | + tenant_id = (known after apply)
2026-02-17 02:25:52.548941 | orchestrator |
2026-02-17 02:25:52.548948 | orchestrator | + allocation_pool {
2026-02-17 02:25:52.548954 | orchestrator | + end = "192.168.31.250"
2026-02-17 02:25:52.548959 | orchestrator | + start = "192.168.31.200"
2026-02-17 02:25:52.548965 | orchestrator | }
2026-02-17 02:25:52.548970 | orchestrator | }
2026-02-17 02:25:52.548976 | orchestrator |
2026-02-17 02:25:52.548982 | orchestrator | # terraform_data.image will be created
2026-02-17 02:25:52.548988 | orchestrator | + resource "terraform_data" "image" {
2026-02-17 02:25:52.548993 | orchestrator | + id = (known after apply)
2026-02-17 02:25:52.548999 | orchestrator | + input = "Ubuntu 24.04"
2026-02-17 02:25:52.549004 | orchestrator | + output = (known after apply)
2026-02-17 02:25:52.549010 | orchestrator | }
2026-02-17 02:25:52.549015 | orchestrator |
2026-02-17 02:25:52.549021 | orchestrator | # terraform_data.image_node will be created
2026-02-17 02:25:52.549028 | orchestrator | + resource "terraform_data" "image_node" {
2026-02-17 02:25:52.549033 | orchestrator | + id = (known after apply)
2026-02-17 02:25:52.549039 | orchestrator | + input = "Ubuntu 24.04"
2026-02-17 02:25:52.549045 | orchestrator | + output = (known after apply)
2026-02-17 02:25:52.549051 | orchestrator | }
2026-02-17 02:25:52.549057 | orchestrator |
2026-02-17 02:25:52.549063 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-02-17 02:25:52.549069 | orchestrator |
2026-02-17 02:25:52.549076 | orchestrator | Changes to Outputs:
2026-02-17 02:25:52.549082 | orchestrator | + manager_address = (sensitive value)
2026-02-17 02:25:52.549088 | orchestrator | + private_key = (sensitive value)
2026-02-17 02:25:52.793316 | orchestrator | terraform_data.image_node: Creating...
2026-02-17 02:25:52.794336 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=4ba887ab-9948-c27a-23d0-ef8d369f8492]
2026-02-17 02:25:52.796272 | orchestrator | terraform_data.image: Creating...
2026-02-17 02:25:52.797342 | orchestrator | terraform_data.image: Creation complete after 0s [id=edd60f3c-05aa-9a3c-a98b-a3c6202d3576]
2026-02-17 02:25:52.813695 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-17 02:25:52.814826 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-17 02:25:52.829731 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-17 02:25:52.829794 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-17 02:25:52.829802 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-17 02:25:52.829816 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-17 02:25:52.829822 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-17 02:25:52.838067 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-17 02:25:52.838116 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-17 02:25:52.841274 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-17 02:25:53.302472 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-17 02:25:53.310834 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-17 02:25:53.311850 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-17 02:25:53.316253 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-17 02:25:53.356912 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-02-17 02:25:53.368776 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-17 02:25:53.863199 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=2e6d106f-f3d7-47af-85b0-95ff20b17722]
2026-02-17 02:25:53.871556 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-17 02:25:56.424965 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=d011ea34-b61d-4f0b-ab11-4490cc68cf86]
2026-02-17 02:25:56.431208 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-17 02:25:56.435778 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=fe38296d-c093-48ca-96c0-8f602ad79427]
2026-02-17 02:25:56.442197 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-17 02:25:56.453099 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=fd9c05b9-f9ca-4e15-8356-6060fba46416]
2026-02-17 02:25:56.463439 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-17 02:25:56.465979 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=5f284eb4-05bb-45c0-8f93-4c0e151e7350]
2026-02-17 02:25:56.475655 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=16391a47-5928-45dd-a24a-c21b57e88b67]
2026-02-17 02:25:56.477391 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-17 02:25:56.481623 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-17 02:25:56.485762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3]
2026-02-17 02:25:56.491002 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-17 02:25:56.532824 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=b093f3ae-168d-469e-aca7-9106842051bc]
2026-02-17 02:25:56.543619 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-17 02:25:56.547679 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=9180336b1a5341ea475df4783c15dae04729bfa2]
2026-02-17 02:25:56.555416 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-17 02:25:56.561214 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=1b5f5072c2ddf4d8202430ffd09927d9ac3f8e3c]
2026-02-17 02:25:56.566185 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-17 02:25:56.569624 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=f250a0b0-2ca1-4b6e-93a1-cfc431f0e856]
2026-02-17 02:25:56.583601 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=18a6fd36-4eb2-4c52-9e33-394f78b6cc4d]
2026-02-17 02:25:57.211459 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=214cfdef-2253-4ef6-bb28-2ea2555c75c7]
2026-02-17 02:25:57.544553 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=5887ca26-eeb9-43ab-8202-5cfb8949d06a]
2026-02-17 02:25:57.551601 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-17 02:25:59.785597 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=3d567a40-efe3-40c8-a008-8295f8dd6e25]
2026-02-17 02:25:59.844336 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=d83a89d3-91a6-467d-8248-bfeccded0a7a]
2026-02-17 02:25:59.853372 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=95350bd6-b245-44d1-bed2-d3debca83b15]
2026-02-17 02:25:59.860008 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=f3163655-9995-491d-8d46-91e3626b16e8]
2026-02-17 02:25:59.861313 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=37d8f58a-c342-42fe-9565-ad857c4ec944]
2026-02-17 02:25:59.898847 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=69a38e66-d857-4b93-85c9-a75df11f4978]
2026-02-17 02:26:00.707337 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=e800e745-3e5a-456d-b4c0-877e55616fc3]
2026-02-17 02:26:00.720228 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-17 02:26:00.720425 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-17 02:26:00.720556 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-17 02:26:00.895692 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=ed5d81ea-122e-447d-911d-7775524ee3ed]
2026-02-17 02:26:00.904262 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-17 02:26:00.904858 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-17 02:26:00.905560 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-17 02:26:00.905944 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-17 02:26:00.908474 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-17 02:26:00.910298 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-17 02:26:00.926435 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ac03a62b-3f39-4598-8cea-338bdae6eb1d]
2026-02-17 02:26:00.931924 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-17 02:26:00.932731 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-17 02:26:00.933177 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-17 02:26:01.063551 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=3c75245c-bf7f-4b0b-bc03-f1b1a48a4961]
2026-02-17 02:26:01.067872 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-17 02:26:01.075690 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=f9579f52-e657-4550-ba15-579fb7caa732]
2026-02-17 02:26:01.086396 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-17 02:26:01.229214 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=2a53edb4-b3bc-4388-acf1-51beda2ffcce]
2026-02-17 02:26:01.236064 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-17 02:26:01.418857 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=d9b12661-35c3-460e-bb35-907e542314bc]
2026-02-17 02:26:01.426506 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-17 02:26:01.477188 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=53f49e28-7f16-4b73-b56e-bfcdd8161901]
2026-02-17 02:26:01.484815 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-17 02:26:01.560879 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=8cf924c8-726f-419d-85cd-447215e794ce]
2026-02-17 02:26:01.571501 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-17 02:26:01.585560 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=cc97a4ae-1462-47cd-90e2-8bd945951918]
2026-02-17 02:26:01.593462 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-17 02:26:01.693031 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=bc169d0f-fba7-4580-bf60-c46c2aa77886]
2026-02-17 02:26:01.892981 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=629bd047-bdd9-4ffd-948e-ca3eb0ef0874]
2026-02-17 02:26:01.924150 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=45f277ca-7c07-4bd4-b8af-94e234f06e1d]
2026-02-17 02:26:02.028691 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=a4ee8765-a0a0-4786-ba81-f24c727e3b13]
2026-02-17 02:26:02.053403 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ccc3206f-8554-4d4b-9a88-b09be47e7db3]
2026-02-17 02:26:02.115474 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=719c041d-5be7-441e-aad7-bde693c77e48]
2026-02-17 02:26:02.126402 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=3e6ebf1e-d7ec-4c5b-9397-f0125312d223]
2026-02-17 02:26:02.175290 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=af89dfba-3e91-4573-9160-a604b2f03ae7]
2026-02-17 02:26:02.282611 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=b0e534a3-1926-4a67-9735-234a2bc1559e]
2026-02-17 02:26:03.001505 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=c670487e-9718-4a3a-a4e9-85655fb01413]
2026-02-17 02:26:03.017814 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-17 02:26:03.031152 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-17 02:26:03.033265 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-17 02:26:03.033897 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-17 02:26:03.039343 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-17 02:26:03.052657 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-17 02:26:03.058269 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-17 02:26:04.753889 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=f1174feb-2f1c-440b-8cbd-4bb0cf8ad8d0]
2026-02-17 02:26:04.759436 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-17 02:26:04.763129 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-17 02:26:04.765239 | orchestrator | local_file.inventory: Creating...
2026-02-17 02:26:04.767082 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=14ef5138cb21fe97af65a5f7d59e35642e555d1b]
2026-02-17 02:26:04.771095 | orchestrator | local_file.inventory: Creation complete after 0s [id=0745bb985be2d0245583a007628599558bbe50ed]
2026-02-17 02:26:05.575072 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=f1174feb-2f1c-440b-8cbd-4bb0cf8ad8d0]
2026-02-17 02:26:13.032264 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-17 02:26:13.034647 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-17 02:26:13.034715 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-17 02:26:13.048299 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-17 02:26:13.053471 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-17 02:26:13.058650 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-17 02:26:23.040684 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-17 02:26:23.040778 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-17 02:26:23.040792 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-17 02:26:23.048941 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-17 02:26:23.054293 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-17 02:26:23.059450 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-17 02:26:23.500825 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=bb9d32ca-1ac5-46d7-9ef2-85a088b943ef]
2026-02-17 02:26:23.511970 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=811e4fa2-dbdc-4732-bdbf-3c42a2344147]
2026-02-17 02:26:23.544351 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=9e74e133-638e-486d-be13-748fd9c78d27]
2026-02-17 02:26:23.585640 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=6629e832-611e-434e-9a88-b28cea542c42]
2026-02-17 02:26:33.041255 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-17 02:26:33.054486 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-17 02:26:33.627329 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=2b315565-950e-4643-a888-4d81f4338f03]
2026-02-17 02:26:33.762543 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=ade2c35d-7339-4c5f-9619-4823d1965451]
2026-02-17 02:26:33.768885 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-17 02:26:33.796245 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4820780443500266750]
2026-02-17 02:26:33.797527 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-17 02:26:33.798744 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-17 02:26:33.799179 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-17 02:26:33.802989 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-17 02:26:33.803140 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-17 02:26:33.806975 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-17 02:26:33.810511 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-17 02:26:33.814503 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-17 02:26:33.835783 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-17 02:26:33.840627 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-17 02:26:37.192460 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=6629e832-611e-434e-9a88-b28cea542c42/b093f3ae-168d-469e-aca7-9106842051bc]
2026-02-17 02:26:37.192913 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=811e4fa2-dbdc-4732-bdbf-3c42a2344147/ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3]
2026-02-17 02:26:37.215314 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=ade2c35d-7339-4c5f-9619-4823d1965451/fd9c05b9-f9ca-4e15-8356-6060fba46416]
2026-02-17 02:26:37.227459 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=811e4fa2-dbdc-4732-bdbf-3c42a2344147/5f284eb4-05bb-45c0-8f93-4c0e151e7350]
2026-02-17 02:26:37.231643 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=6629e832-611e-434e-9a88-b28cea542c42/18a6fd36-4eb2-4c52-9e33-394f78b6cc4d]
2026-02-17 02:26:37.244609 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=ade2c35d-7339-4c5f-9619-4823d1965451/16391a47-5928-45dd-a24a-c21b57e88b67]
2026-02-17 02:26:43.315860 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=6629e832-611e-434e-9a88-b28cea542c42/d011ea34-b61d-4f0b-ab11-4490cc68cf86]
2026-02-17 02:26:43.315943 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=811e4fa2-dbdc-4732-bdbf-3c42a2344147/fe38296d-c093-48ca-96c0-8f602ad79427]
2026-02-17 02:26:43.346674 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=ade2c35d-7339-4c5f-9619-4823d1965451/f250a0b0-2ca1-4b6e-93a1-cfc431f0e856]
2026-02-17 02:26:43.837647 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-17 02:26:53.838375 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-17 02:26:54.196517 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=3fabd0b4-e63e-4400-85b9-822b4b14c9b3]
2026-02-17 02:26:54.226658 | orchestrator |
2026-02-17 02:26:54.226730 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-17 02:26:54.226738 | orchestrator |
2026-02-17 02:26:54.226743 | orchestrator | Outputs:
2026-02-17 02:26:54.226748 | orchestrator |
2026-02-17 02:26:54.226754 | orchestrator | manager_address =
2026-02-17 02:26:54.226759 | orchestrator | private_key =
2026-02-17 02:26:54.731622 | orchestrator | ok: Runtime: 0:01:10.060532
2026-02-17 02:26:54.767001 |
2026-02-17 02:26:54.767145 | TASK [Fetch manager address]
2026-02-17 02:26:55.299996 | orchestrator | ok
2026-02-17 02:26:55.314608 |
2026-02-17 02:26:55.314880 | TASK [Set manager_host address]
2026-02-17 02:26:55.377723 | orchestrator | ok
2026-02-17 02:26:55.384918 |
2026-02-17 02:26:55.385040 | LOOP [Update ansible collections]
2026-02-17 02:26:57.533834 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-17 02:26:57.534229 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-17 02:26:57.534296 | orchestrator | Starting galaxy collection install process
2026-02-17 02:26:57.534340 | orchestrator | Process install dependency map
2026-02-17 02:26:57.534380 | orchestrator | Starting collection install process
2026-02-17 02:26:57.534415 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-17 02:26:57.534456 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-17 02:26:57.534500 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-17 02:26:57.534588 | orchestrator | ok: Item: commons Runtime: 0:00:01.750679
2026-02-17 02:26:58.949946 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-17 02:26:58.950127 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-17 02:26:58.950199 | orchestrator | Starting galaxy collection install process
2026-02-17 02:26:58.950243 | orchestrator | Process install dependency map
2026-02-17 02:26:58.950281 | orchestrator | Starting collection install process
2026-02-17 02:26:58.950316 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-17 02:26:58.950352 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-17 02:26:58.950386 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-17 02:26:58.950440 | orchestrator | ok: Item: services Runtime: 0:00:01.075062
2026-02-17 02:26:58.970761 |
2026-02-17 02:26:58.970959 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-17 02:27:09.679673 | orchestrator | ok
2026-02-17 02:27:09.698051 |
2026-02-17 02:27:09.698305 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-17 02:28:09.749699 | orchestrator | ok
2026-02-17 02:28:09.762043 |
2026-02-17 02:28:09.762229 | TASK [Fetch manager ssh hostkey]
2026-02-17 02:28:11.339480 | orchestrator | Output suppressed because no_log was given
2026-02-17 02:28:11.356066 |
2026-02-17 02:28:11.356280 | TASK [Get ssh keypair from terraform environment]
2026-02-17 02:28:11.893619 | orchestrator | ok: Runtime: 0:00:00.006776
2026-02-17 02:28:11.911555 |
2026-02-17 02:28:11.911727 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-17 02:28:11.960498 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-17 02:28:11.970640 |
2026-02-17 02:28:11.970768 | TASK [Run manager part 0]
2026-02-17 02:28:13.589023 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-17 02:28:13.724480 | orchestrator |
2026-02-17 02:28:13.724561 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-17 02:28:13.724572 | orchestrator |
2026-02-17 02:28:13.724589 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-17 02:28:15.738266 | orchestrator | ok: [testbed-manager]
2026-02-17 02:28:15.738330 | orchestrator |
2026-02-17 02:28:15.738359 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-17 02:28:15.738370 | orchestrator |
2026-02-17 02:28:15.738382 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-17 02:28:17.949859 | orchestrator | ok: [testbed-manager]
2026-02-17 02:28:17.949930 | orchestrator |
2026-02-17 02:28:17.949939 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-17 02:28:18.685456 | orchestrator | ok: [testbed-manager]
2026-02-17 02:28:18.685524 | orchestrator |
2026-02-17 02:28:18.685533 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-17 02:28:18.746368 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:28:18.746440 | orchestrator |
2026-02-17 02:28:18.746455 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-17 02:28:18.779603 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:28:18.779673 | orchestrator |
2026-02-17 02:28:18.779686 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-17 02:28:18.808961 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:28:18.809018 | orchestrator | 2026-02-17 02:28:18.809024 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-17 02:28:18.839034 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:28:18.839087 | orchestrator | 2026-02-17 02:28:18.839093 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-17 02:28:18.870127 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:28:18.870188 | orchestrator | 2026-02-17 02:28:18.870196 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-17 02:28:18.906309 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:28:18.906375 | orchestrator | 2026-02-17 02:28:18.906383 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-17 02:28:18.938982 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:28:18.939035 | orchestrator | 2026-02-17 02:28:18.939042 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-17 02:28:19.707368 | orchestrator | changed: [testbed-manager] 2026-02-17 02:28:19.707482 | orchestrator | 2026-02-17 02:28:19.707490 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-17 02:31:03.477428 | orchestrator | changed: [testbed-manager] 2026-02-17 02:31:03.477492 | orchestrator | 2026-02-17 02:31:03.477508 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-17 02:32:46.306254 | orchestrator | changed: [testbed-manager] 2026-02-17 02:32:46.306302 | orchestrator | 2026-02-17 02:32:46.306310 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-17 02:33:12.517943 | orchestrator | changed: [testbed-manager] 2026-02-17 02:33:12.518143 | orchestrator | 2026-02-17 02:33:12.518170 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-17 02:33:22.834368 | orchestrator | changed: [testbed-manager] 2026-02-17 02:33:22.834419 | orchestrator | 2026-02-17 02:33:22.834430 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-17 02:33:22.881873 | orchestrator | ok: [testbed-manager] 2026-02-17 02:33:22.881914 | orchestrator | 2026-02-17 02:33:22.881922 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-17 02:33:23.729872 | orchestrator | ok: [testbed-manager] 2026-02-17 02:33:23.729909 | orchestrator | 2026-02-17 02:33:23.729916 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-17 02:33:24.508606 | orchestrator | changed: [testbed-manager] 2026-02-17 02:33:24.508663 | orchestrator | 2026-02-17 02:33:24.508678 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-17 02:33:31.588942 | orchestrator | changed: [testbed-manager] 2026-02-17 02:33:31.589026 | orchestrator | 2026-02-17 02:33:31.589069 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-17 02:33:38.351273 | orchestrator | changed: [testbed-manager] 2026-02-17 02:33:38.351380 | orchestrator | 2026-02-17 02:33:38.351398 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-17 02:33:41.232803 | orchestrator | changed: [testbed-manager] 2026-02-17 02:33:41.232901 | orchestrator | 2026-02-17 02:33:41.232917 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-17 02:33:43.224338 | 
orchestrator | changed: [testbed-manager] 2026-02-17 02:33:43.225026 | orchestrator | 2026-02-17 02:33:43.225057 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-17 02:33:44.384386 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-17 02:33:44.384449 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-17 02:33:44.384460 | orchestrator | 2026-02-17 02:33:44.384470 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-17 02:33:44.433672 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-17 02:33:44.433759 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-17 02:33:44.433772 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-17 02:33:44.433784 | orchestrator | deprecation_warnings=False in ansible.cfg. 
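The DEPRECATION WARNING above (emitted during the rsync-based "Sync sources in /opt/src" task) points at its own remedy: deprecation warnings can be disabled in `ansible.cfg`. A minimal sketch of that setting — the `[defaults]` section name is standard Ansible configuration; whether the testbed repo actually ships such a file is an assumption:

```ini
# ansible.cfg — hedged sketch: silences deprecation warnings like the one above
[defaults]
deprecation_warnings = False
```

Note this hides all deprecation notices, so it trades a quieter log for less advance warning about behavior removed in later Ansible releases (here, 2.19).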
2026-02-17 02:33:51.956651 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-17 02:33:51.956723 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-17 02:33:51.956738 | orchestrator | 2026-02-17 02:33:51.956750 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-17 02:33:52.542880 | orchestrator | changed: [testbed-manager] 2026-02-17 02:33:52.542967 | orchestrator | 2026-02-17 02:33:52.542981 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-17 02:34:13.277699 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-17 02:34:13.277830 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-17 02:34:13.277861 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-17 02:34:13.277882 | orchestrator | 2026-02-17 02:34:13.277902 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-17 02:34:15.770542 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-17 02:34:15.770668 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-17 02:34:15.770684 | orchestrator | 2026-02-17 02:34:15.770696 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-17 02:34:15.770709 | orchestrator | 2026-02-17 02:34:15.770721 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 02:34:17.245969 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:17.246006 | orchestrator | 2026-02-17 02:34:17.246058 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-17 02:34:17.297439 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:17.297535 | 
orchestrator | 2026-02-17 02:34:17.297550 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-17 02:34:17.376483 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:17.376624 | orchestrator | 2026-02-17 02:34:17.376645 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-17 02:34:18.201456 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:18.201546 | orchestrator | 2026-02-17 02:34:18.201560 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-17 02:34:18.944974 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:18.945073 | orchestrator | 2026-02-17 02:34:18.945090 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-17 02:34:20.381256 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-17 02:34:20.381344 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-17 02:34:20.381358 | orchestrator | 2026-02-17 02:34:20.381381 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-17 02:34:21.869237 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:21.869357 | orchestrator | 2026-02-17 02:34:21.869373 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-17 02:34:23.680011 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-17 02:34:23.680147 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-17 02:34:23.680167 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-17 02:34:23.680180 | orchestrator | 2026-02-17 02:34:23.680195 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-17 02:34:23.731211 | orchestrator | skipping: 
[testbed-manager] 2026-02-17 02:34:23.731273 | orchestrator | 2026-02-17 02:34:23.731289 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-17 02:34:23.798987 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:23.799058 | orchestrator | 2026-02-17 02:34:23.799068 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-17 02:34:24.410258 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:24.410431 | orchestrator | 2026-02-17 02:34:24.410454 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-17 02:34:24.484166 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:24.484210 | orchestrator | 2026-02-17 02:34:24.484218 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-17 02:34:25.419014 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-17 02:34:25.419070 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:25.419085 | orchestrator | 2026-02-17 02:34:25.419095 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-17 02:34:25.452861 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:25.452903 | orchestrator | 2026-02-17 02:34:25.452912 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-17 02:34:25.485224 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:25.485263 | orchestrator | 2026-02-17 02:34:25.485271 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-17 02:34:25.524984 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:25.525034 | orchestrator | 2026-02-17 02:34:25.525047 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-17 02:34:25.597791 | 
orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:25.597834 | orchestrator | 2026-02-17 02:34:25.597843 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-17 02:34:26.348808 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:26.348882 | orchestrator | 2026-02-17 02:34:26.348892 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-17 02:34:26.348901 | orchestrator | 2026-02-17 02:34:26.348908 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 02:34:27.890936 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:27.890971 | orchestrator | 2026-02-17 02:34:27.890977 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-17 02:34:28.842466 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:28.842649 | orchestrator | 2026-02-17 02:34:28.842662 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 02:34:28.842668 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-17 02:34:28.842674 | orchestrator | 2026-02-17 02:34:29.238950 | orchestrator | ok: Runtime: 0:06:16.651649 2026-02-17 02:34:29.265599 | 2026-02-17 02:34:29.265786 | TASK [Point out that logging in on the manager is now possible] 2026-02-17 02:34:29.298390 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-02-17 02:34:29.305685 | 2026-02-17 02:34:29.305795 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-17 02:34:29.346954 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
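The recurring 'Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"' task is Ansible's `wait_for` module with a `search_regex` on the SSH banner. As an illustration only (host and port values here are hypothetical, not taken from this job), the same check can be sketched in plain Python:

```python
import socket


def port_has_banner(host: str, port: int, token: bytes, timeout: float = 5.0) -> bool:
    """Connect to host:port and check whether the first bytes sent by the
    server (e.g. the SSH banner "SSH-2.0-OpenSSH_...") contain `token`.
    Returns False on refused/timed-out connections instead of raising."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(256)
    except OSError:
        return False
    return token in banner
```

In the playbook itself this would be `ansible.builtin.wait_for` with `port: 22`, `search_regex: OpenSSH`, and a `timeout`; the Python version only shows the mechanism, not the actual task.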
2026-02-17 02:34:29.354217 | 2026-02-17 02:34:29.354333 | TASK [Run manager part 1 + 2] 2026-02-17 02:34:30.223531 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-17 02:34:30.284664 | orchestrator | 2026-02-17 02:34:30.284717 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-17 02:34:30.284725 | orchestrator | 2026-02-17 02:34:30.284739 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 02:34:32.919090 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:32.919141 | orchestrator | 2026-02-17 02:34:32.919162 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-17 02:34:32.965077 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:32.965136 | orchestrator | 2026-02-17 02:34:32.965146 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-17 02:34:33.013340 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:33.013391 | orchestrator | 2026-02-17 02:34:33.013400 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-17 02:34:33.063604 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:33.063662 | orchestrator | 2026-02-17 02:34:33.063672 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-17 02:34:33.143700 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:33.143762 | orchestrator | 2026-02-17 02:34:33.143773 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-17 02:34:33.213442 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:33.213491 | orchestrator | 2026-02-17 02:34:33.213498 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-17 02:34:33.271113 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-17 02:34:33.271164 | orchestrator | 2026-02-17 02:34:33.271171 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-17 02:34:34.072292 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:34.072359 | orchestrator | 2026-02-17 02:34:34.072370 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-17 02:34:34.123251 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:34.123306 | orchestrator | 2026-02-17 02:34:34.123314 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-17 02:34:35.581673 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:35.581741 | orchestrator | 2026-02-17 02:34:35.581753 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-17 02:34:36.209275 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:36.209330 | orchestrator | 2026-02-17 02:34:36.209338 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-17 02:34:37.411166 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:37.411244 | orchestrator | 2026-02-17 02:34:37.411262 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-17 02:34:53.766578 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:53.766800 | orchestrator | 2026-02-17 02:34:53.766820 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-17 02:34:54.496497 | orchestrator | ok: [testbed-manager] 2026-02-17 02:34:54.496579 | orchestrator | 2026-02-17 02:34:54.496619 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-17 02:34:54.546367 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:34:54.546448 | orchestrator | 2026-02-17 02:34:54.546459 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-17 02:34:55.521783 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:55.521829 | orchestrator | 2026-02-17 02:34:55.521838 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-17 02:34:56.586991 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:56.587068 | orchestrator | 2026-02-17 02:34:56.587077 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-17 02:34:57.159386 | orchestrator | changed: [testbed-manager] 2026-02-17 02:34:57.159457 | orchestrator | 2026-02-17 02:34:57.159465 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-17 02:34:57.207123 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-17 02:34:57.207249 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-17 02:34:57.207273 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-17 02:34:57.207289 | orchestrator | deprecation_warnings=False in ansible.cfg. 
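The version-pinned installs in this part of the run ("Install requests >= 2.32.2", "Install docker >= 7.1.0") are idempotent: pip only acts when the installed version fails the `>=` constraint, otherwise it reports "Requirement already satisfied". A simplified sketch of that comparison — real pip follows PEP 440 via the `packaging` library, and both function names below are hypothetical helpers, not testbed code:

```python
def parse_version(v: str) -> tuple:
    """Split a plain dotted version like '2.32.5' into a tuple of ints,
    so that ('2.10' > '2.9') compares numerically, not lexically."""
    return tuple(int(part) for part in v.split("."))


def satisfies_min(installed: str, minimum: str) -> bool:
    """True if installed >= minimum, e.g. satisfies_min('2.32.5', '2.32.2').
    Simplification: handles only numeric dotted versions, not PEP 440
    pre-releases or local version segments."""
    return parse_version(installed) >= parse_version(minimum)
```

For example, `satisfies_min("2.32.5", "2.32.2")` holds, so the `requests` task above is reported `ok` rather than `changed` on a second run.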
2026-02-17 02:35:01.419007 | orchestrator | changed: [testbed-manager] 2026-02-17 02:35:01.419053 | orchestrator | 2026-02-17 02:35:01.419060 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-17 02:35:11.089206 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-17 02:35:11.089330 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-17 02:35:11.089343 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-17 02:35:11.089353 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-17 02:35:11.089370 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-17 02:35:11.089377 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-17 02:35:11.089384 | orchestrator | 2026-02-17 02:35:11.089391 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-17 02:35:12.307515 | orchestrator | changed: [testbed-manager] 2026-02-17 02:35:12.307649 | orchestrator | 2026-02-17 02:35:12.307667 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-17 02:35:12.358636 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:35:12.358775 | orchestrator | 2026-02-17 02:35:12.358794 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-17 02:35:15.609296 | orchestrator | changed: [testbed-manager] 2026-02-17 02:35:15.609398 | orchestrator | 2026-02-17 02:35:15.609412 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-17 02:35:15.653243 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:35:15.653359 | orchestrator | 2026-02-17 02:35:15.653384 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-17 02:37:08.404473 | orchestrator | changed: [testbed-manager] 2026-02-17 
02:37:08.404555 | orchestrator | 2026-02-17 02:37:08.404568 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-17 02:37:09.681955 | orchestrator | ok: [testbed-manager] 2026-02-17 02:37:09.682070 | orchestrator | 2026-02-17 02:37:09.682084 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 02:37:09.682093 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-17 02:37:09.682101 | orchestrator | 2026-02-17 02:37:09.994057 | orchestrator | ok: Runtime: 0:02:40.151180 2026-02-17 02:37:10.010795 | 2026-02-17 02:37:10.011008 | TASK [Reboot manager] 2026-02-17 02:37:11.547660 | orchestrator | ok: Runtime: 0:00:00.955040 2026-02-17 02:37:11.565379 | 2026-02-17 02:37:11.565560 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-17 02:37:28.239753 | orchestrator | ok 2026-02-17 02:37:28.250767 | 2026-02-17 02:37:28.250932 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-17 02:38:28.297168 | orchestrator | ok 2026-02-17 02:38:28.307590 | 2026-02-17 02:38:28.307745 | TASK [Deploy manager + bootstrap nodes] 2026-02-17 02:38:31.133781 | orchestrator | 2026-02-17 02:38:31.133994 | orchestrator | # DEPLOY MANAGER 2026-02-17 02:38:31.134080 | orchestrator | 2026-02-17 02:38:31.134102 | orchestrator | + set -e 2026-02-17 02:38:31.134119 | orchestrator | + echo 2026-02-17 02:38:31.134136 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-17 02:38:31.134157 | orchestrator | + echo 2026-02-17 02:38:31.134213 | orchestrator | + cat /opt/manager-vars.sh 2026-02-17 02:38:31.137684 | orchestrator | export NUMBER_OF_NODES=6 2026-02-17 02:38:31.137827 | orchestrator | 2026-02-17 02:38:31.137845 | orchestrator | export CEPH_VERSION=reef 2026-02-17 02:38:31.137859 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-17 02:38:31.137872 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-17 02:38:31.137900 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-17 02:38:31.137911 | orchestrator | 2026-02-17 02:38:31.137929 | orchestrator | export ARA=false 2026-02-17 02:38:31.137941 | orchestrator | export DEPLOY_MODE=manager 2026-02-17 02:38:31.137959 | orchestrator | export TEMPEST=false 2026-02-17 02:38:31.137971 | orchestrator | export IS_ZUUL=true 2026-02-17 02:38:31.137982 | orchestrator | 2026-02-17 02:38:31.137999 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 02:38:31.138012 | orchestrator | export EXTERNAL_API=false 2026-02-17 02:38:31.138072 | orchestrator | 2026-02-17 02:38:31.138086 | orchestrator | export IMAGE_USER=ubuntu 2026-02-17 02:38:31.138101 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-17 02:38:31.138114 | orchestrator | 2026-02-17 02:38:31.138125 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-17 02:38:31.138149 | orchestrator | 2026-02-17 02:38:31.138161 | orchestrator | + echo 2026-02-17 02:38:31.138174 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 02:38:31.139252 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 02:38:31.139310 | orchestrator | ++ INTERACTIVE=false 2026-02-17 02:38:31.139323 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 02:38:31.139335 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 02:38:31.139381 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 02:38:31.139393 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 02:38:31.139401 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 02:38:31.139407 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 02:38:31.139413 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 02:38:31.139420 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 02:38:31.139426 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 02:38:31.139433 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 02:38:31.139439 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 02:38:31.139445 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 02:38:31.139462 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 02:38:31.139469 | orchestrator | ++ export ARA=false 2026-02-17 02:38:31.139476 | orchestrator | ++ ARA=false 2026-02-17 02:38:31.139482 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 02:38:31.139488 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-17 02:38:31.139494 | orchestrator | ++ export TEMPEST=false 2026-02-17 02:38:31.139500 | orchestrator | ++ TEMPEST=false 2026-02-17 02:38:31.139506 | orchestrator | ++ export IS_ZUUL=true 2026-02-17 02:38:31.139512 | orchestrator | ++ IS_ZUUL=true 2026-02-17 02:38:31.139518 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 02:38:31.139525 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 02:38:31.139531 | orchestrator | ++ export EXTERNAL_API=false 2026-02-17 02:38:31.139537 | orchestrator | ++ EXTERNAL_API=false 2026-02-17 02:38:31.139543 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-17 02:38:31.139549 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-17 02:38:31.139555 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-17 02:38:31.139562 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-17 02:38:31.139568 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-17 02:38:31.139574 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-17 02:38:31.139580 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-17 02:38:31.203339 | orchestrator | + docker version 2026-02-17 02:38:31.318363 | orchestrator | Client: Docker Engine - Community 2026-02-17 02:38:31.318456 | orchestrator | Version: 27.5.1 2026-02-17 02:38:31.318624 | orchestrator | API version: 1.47 2026-02-17 02:38:31.318640 | orchestrator | Go version: go1.22.11 2026-02-17 02:38:31.318649 | orchestrator | Git commit: 9f9e405 2026-02-17 02:38:31.318658 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-17 02:38:31.318668 | orchestrator | OS/Arch: linux/amd64 2026-02-17 02:38:31.318677 | orchestrator | Context: default 2026-02-17 02:38:31.318685 | orchestrator | 2026-02-17 02:38:31.318696 | orchestrator | Server: Docker Engine - Community 2026-02-17 02:38:31.318705 | orchestrator | Engine: 2026-02-17 02:38:31.318775 | orchestrator | Version: 27.5.1 2026-02-17 02:38:31.318786 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-17 02:38:31.318820 | orchestrator | Go version: go1.22.11 2026-02-17 02:38:31.318829 | orchestrator | Git commit: 4c9b3b0 2026-02-17 02:38:31.318838 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-17 02:38:31.318846 | orchestrator | OS/Arch: linux/amd64 2026-02-17 02:38:31.318855 | orchestrator | Experimental: false 2026-02-17 02:38:31.318863 | orchestrator | containerd: 2026-02-17 02:38:31.318872 | orchestrator | Version: v2.2.1 2026-02-17 02:38:31.318881 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-17 02:38:31.318890 | orchestrator | runc: 2026-02-17 02:38:31.318898 | orchestrator | Version: 1.3.4 2026-02-17 02:38:31.318913 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-17 02:38:31.318922 | orchestrator | docker-init: 2026-02-17 02:38:31.318930 | orchestrator | Version: 0.19.0 2026-02-17 02:38:31.318939 | orchestrator | GitCommit: de40ad0 2026-02-17 02:38:31.322744 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-17 02:38:31.330783 | orchestrator | + set -e 2026-02-17 02:38:31.330836 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 02:38:31.330843 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 02:38:31.331868 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 02:38:31.331884 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 02:38:31.331889 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 02:38:31.331894 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 
02:38:31.331901 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 02:38:31.331906 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 02:38:31.331912 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 02:38:31.331916 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 02:38:31.331921 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 02:38:31.331926 | orchestrator | ++ export ARA=false 2026-02-17 02:38:31.331931 | orchestrator | ++ ARA=false 2026-02-17 02:38:31.331935 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 02:38:31.331940 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-17 02:38:31.331944 | orchestrator | ++ export TEMPEST=false 2026-02-17 02:38:31.331948 | orchestrator | ++ TEMPEST=false 2026-02-17 02:38:31.331953 | orchestrator | ++ export IS_ZUUL=true 2026-02-17 02:38:31.331957 | orchestrator | ++ IS_ZUUL=true 2026-02-17 02:38:31.331961 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 02:38:31.331966 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 02:38:31.331970 | orchestrator | ++ export EXTERNAL_API=false 2026-02-17 02:38:31.331974 | orchestrator | ++ EXTERNAL_API=false 2026-02-17 02:38:31.331979 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-17 02:38:31.331983 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-17 02:38:31.331987 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-17 02:38:31.331992 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-17 02:38:31.331996 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-17 02:38:31.332000 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-17 02:38:31.332005 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 02:38:31.332009 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 02:38:31.332013 | orchestrator | ++ INTERACTIVE=false 2026-02-17 02:38:31.332018 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 02:38:31.332025 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-02-17 02:38:31.332030 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-17 02:38:31.332034 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-17 02:38:31.335776 | orchestrator | + set -e 2026-02-17 02:38:31.335870 | orchestrator | + VERSION=9.5.0 2026-02-17 02:38:31.335888 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-17 02:38:31.342675 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-17 02:38:31.342772 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-17 02:38:31.345407 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-17 02:38:31.351105 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-17 02:38:31.365112 | orchestrator | /opt/configuration ~ 2026-02-17 02:38:31.365198 | orchestrator | + set -e 2026-02-17 02:38:31.365213 | orchestrator | + pushd /opt/configuration 2026-02-17 02:38:31.365225 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-17 02:38:31.367467 | orchestrator | + source /opt/venv/bin/activate 2026-02-17 02:38:31.368899 | orchestrator | ++ deactivate nondestructive 2026-02-17 02:38:31.368926 | orchestrator | ++ '[' -n '' ']' 2026-02-17 02:38:31.368936 | orchestrator | ++ '[' -n '' ']' 2026-02-17 02:38:31.368962 | orchestrator | ++ hash -r 2026-02-17 02:38:31.368969 | orchestrator | ++ '[' -n '' ']' 2026-02-17 02:38:31.368975 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-17 02:38:31.368981 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-17 02:38:31.368988 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-17 02:38:31.368996 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-17 02:38:31.369008 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-17 02:38:31.369014 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-17 02:38:31.369020 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-17 02:38:31.369028 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 02:38:31.369035 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 02:38:31.369062 | orchestrator | ++ export PATH 2026-02-17 02:38:31.369070 | orchestrator | ++ '[' -n '' ']' 2026-02-17 02:38:31.369076 | orchestrator | ++ '[' -z '' ']' 2026-02-17 02:38:31.369082 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-17 02:38:31.369088 | orchestrator | ++ PS1='(venv) ' 2026-02-17 02:38:31.369094 | orchestrator | ++ export PS1 2026-02-17 02:38:31.369101 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-17 02:38:31.369107 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-17 02:38:31.369113 | orchestrator | ++ hash -r 2026-02-17 02:38:31.369314 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-17 02:38:32.924159 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-17 02:38:32.925829 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-17 02:38:32.927520 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-17 02:38:32.929141 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-17 02:38:32.930708 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-17 02:38:32.941273 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-17 02:38:32.942995 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-17 02:38:32.943789 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-17 02:38:32.945143 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-17 02:38:32.981215 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-17 02:38:32.982804 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-17 02:38:32.984483 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-17 02:38:32.985908 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-17 02:38:32.989865 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-17 02:38:33.224397 | orchestrator | ++ which gilt 2026-02-17 02:38:33.228414 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-17 02:38:33.228485 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-17 02:38:33.496851 | orchestrator | osism.cfg-generics: 2026-02-17 02:38:33.663103 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-17 02:38:33.663350 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-17 02:38:33.663395 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-17 02:38:33.663415 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-17 02:38:34.341162 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-17 02:38:34.349518 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-17 02:38:34.697234 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-17 02:38:34.756998 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-17 02:38:34.757156 | orchestrator | + deactivate 2026-02-17 02:38:34.757175 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-17 02:38:34.757191 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 02:38:34.757202 | orchestrator | + export PATH 2026-02-17 02:38:34.757214 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-17 02:38:34.757225 | orchestrator | + '[' -n '' ']' 2026-02-17 02:38:34.757240 | orchestrator | + hash -r 2026-02-17 02:38:34.757251 | orchestrator | + '[' -n '' ']' 2026-02-17 02:38:34.757262 | orchestrator | + unset VIRTUAL_ENV 2026-02-17 02:38:34.757274 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-17 02:38:34.757285 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-17 02:38:34.757312 | orchestrator | + unset -f deactivate 2026-02-17 02:38:34.757325 | orchestrator | ~ 2026-02-17 02:38:34.757336 | orchestrator | + popd 2026-02-17 02:38:34.759038 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-17 02:38:34.759110 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-17 02:38:34.760139 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-17 02:38:34.822894 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-17 02:38:34.823032 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-17 02:38:34.823757 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-17 02:38:34.887528 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-17 02:38:34.887791 | orchestrator | ++ semver 2024.2 2025.1 2026-02-17 02:38:34.950619 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-17 02:38:34.950997 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-17 02:38:35.050202 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-17 02:38:35.050283 | orchestrator | + source /opt/venv/bin/activate 2026-02-17 02:38:35.050290 | orchestrator | ++ deactivate nondestructive 2026-02-17 02:38:35.050313 | orchestrator | ++ '[' -n '' ']' 2026-02-17 02:38:35.050317 | orchestrator | ++ '[' -n '' ']' 2026-02-17 02:38:35.050321 | orchestrator | ++ hash -r 2026-02-17 02:38:35.050340 | orchestrator | ++ '[' -n '' ']' 2026-02-17 02:38:35.050345 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-17 02:38:35.050349 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-17 02:38:35.050353 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-17 02:38:35.050359 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-17 02:38:35.050363 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-17 02:38:35.050367 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-17 02:38:35.050371 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-17 02:38:35.050395 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 02:38:35.050417 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 02:38:35.050422 | orchestrator | ++ export PATH 2026-02-17 02:38:35.050426 | orchestrator | ++ '[' -n '' ']' 2026-02-17 02:38:35.050429 | orchestrator | ++ '[' -z '' ']' 2026-02-17 02:38:35.050433 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-17 02:38:35.050437 | orchestrator | ++ PS1='(venv) ' 2026-02-17 02:38:35.050443 | orchestrator | ++ export PS1 2026-02-17 02:38:35.050447 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-17 02:38:35.050910 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-17 02:38:35.050925 | orchestrator | ++ hash -r 2026-02-17 02:38:35.050932 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-17 02:38:36.535246 | orchestrator | 2026-02-17 02:38:36.535381 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-17 02:38:36.535408 | orchestrator | 2026-02-17 02:38:36.535426 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-17 02:38:37.142108 | orchestrator | ok: [testbed-manager] 2026-02-17 02:38:37.142220 | orchestrator | 2026-02-17 02:38:37.142238 | orchestrator | TASK [Copy fact files] ********************************************************* 
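The trace above repeatedly activates `/opt/venv` only when its `activate` script exists, runs a command, then calls `deactivate` to restore `PATH`/`PS1`. A minimal sketch of that guard (the `run_in_venv` name and `VENV` variable are illustrative, not part of the testbed scripts):

```shell
VENV=/opt/venv   # path taken from the trace; adjust as needed

run_in_venv() {
    local rc=0
    if [[ -e "$VENV/bin/activate" ]]; then
        # shellcheck disable=SC1091
        source "$VENV/bin/activate"   # prepends $VENV/bin to PATH, sets PS1
        "$@" || rc=$?
        deactivate                    # restore PATH/PS1, as the trace shows
        return "$rc"
    fi
    "$@"                              # no venv: run with the system interpreter
}
```

The `deactivate nondestructive` call seen at the start of each activation is the activate script cleaning up any previously active venv before installing its own environment.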
2026-02-17 02:38:38.193979 | orchestrator | changed: [testbed-manager] 2026-02-17 02:38:38.194116 | orchestrator | 2026-02-17 02:38:38.194125 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-17 02:38:38.194152 | orchestrator | 2026-02-17 02:38:38.194157 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 02:38:40.648044 | orchestrator | ok: [testbed-manager] 2026-02-17 02:38:40.648147 | orchestrator | 2026-02-17 02:38:40.648161 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-17 02:38:40.707123 | orchestrator | ok: [testbed-manager] 2026-02-17 02:38:40.707218 | orchestrator | 2026-02-17 02:38:40.707231 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-17 02:38:41.183133 | orchestrator | changed: [testbed-manager] 2026-02-17 02:38:41.183233 | orchestrator | 2026-02-17 02:38:41.183251 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-17 02:38:41.225590 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:38:41.225667 | orchestrator | 2026-02-17 02:38:41.225675 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-17 02:38:41.597129 | orchestrator | changed: [testbed-manager] 2026-02-17 02:38:41.597222 | orchestrator | 2026-02-17 02:38:41.597237 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-17 02:38:41.934009 | orchestrator | ok: [testbed-manager] 2026-02-17 02:38:41.934155 | orchestrator | 2026-02-17 02:38:41.934166 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-17 02:38:42.062382 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:38:42.062484 | orchestrator | 2026-02-17 02:38:42.062508 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-17 02:38:42.062530 | orchestrator | 2026-02-17 02:38:42.062550 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 02:38:43.941293 | orchestrator | ok: [testbed-manager] 2026-02-17 02:38:43.941391 | orchestrator | 2026-02-17 02:38:43.941409 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-17 02:38:44.073293 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-17 02:38:44.073437 | orchestrator | 2026-02-17 02:38:44.073465 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-17 02:38:44.151271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-17 02:38:44.151371 | orchestrator | 2026-02-17 02:38:44.151390 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-17 02:38:45.349467 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-17 02:38:45.349542 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-17 02:38:45.349549 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-17 02:38:45.349553 | orchestrator | 2026-02-17 02:38:45.349561 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-17 02:38:47.364589 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-17 02:38:47.364683 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-17 02:38:47.364696 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-17 02:38:47.364706 | orchestrator | 2026-02-17 02:38:47.364716 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-17 02:38:48.049152 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-17 02:38:48.049256 | orchestrator | changed: [testbed-manager] 2026-02-17 02:38:48.049273 | orchestrator | 2026-02-17 02:38:48.049287 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-17 02:38:48.736241 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-17 02:38:48.736418 | orchestrator | changed: [testbed-manager] 2026-02-17 02:38:48.736431 | orchestrator | 2026-02-17 02:38:48.736438 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-17 02:38:48.798872 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:38:48.798947 | orchestrator | 2026-02-17 02:38:48.798958 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-17 02:38:49.181431 | orchestrator | ok: [testbed-manager] 2026-02-17 02:38:49.181567 | orchestrator | 2026-02-17 02:38:49.181577 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-17 02:38:49.275434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-17 02:38:49.275536 | orchestrator | 2026-02-17 02:38:49.275552 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-17 02:38:50.450597 | orchestrator | changed: [testbed-manager] 2026-02-17 02:38:50.450681 | orchestrator | 2026-02-17 02:38:50.450690 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-17 02:38:51.374317 | orchestrator | changed: [testbed-manager] 2026-02-17 02:38:51.374444 | orchestrator | 2026-02-17 02:38:51.374466 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-17 02:39:09.397843 | 
orchestrator | changed: [testbed-manager] 2026-02-17 02:39:09.397939 | orchestrator | 2026-02-17 02:39:09.397952 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-17 02:39:09.458944 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:39:09.459044 | orchestrator | 2026-02-17 02:39:09.459078 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-17 02:39:09.459089 | orchestrator | 2026-02-17 02:39:09.459098 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 02:39:11.474958 | orchestrator | ok: [testbed-manager] 2026-02-17 02:39:11.475041 | orchestrator | 2026-02-17 02:39:11.475051 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-17 02:39:11.638246 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-17 02:39:11.638345 | orchestrator | 2026-02-17 02:39:11.638361 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-17 02:39:11.712695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-17 02:39:11.712856 | orchestrator | 2026-02-17 02:39:11.712875 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-17 02:39:14.501216 | orchestrator | ok: [testbed-manager] 2026-02-17 02:39:14.501321 | orchestrator | 2026-02-17 02:39:14.501334 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-17 02:39:14.567424 | orchestrator | ok: [testbed-manager] 2026-02-17 02:39:14.567529 | orchestrator | 2026-02-17 02:39:14.567546 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-17 02:39:14.718223 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-17 02:39:14.718324 | orchestrator | 2026-02-17 02:39:14.718344 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-17 02:39:17.795339 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-17 02:39:17.795433 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-17 02:39:17.795444 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-17 02:39:17.795454 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-17 02:39:17.795462 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-17 02:39:17.795474 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-17 02:39:17.795487 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-17 02:39:17.795500 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-17 02:39:17.795520 | orchestrator | 2026-02-17 02:39:17.795537 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-17 02:39:18.538122 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:18.538218 | orchestrator | 2026-02-17 02:39:18.538233 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-17 02:39:19.290575 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:19.290651 | orchestrator | 2026-02-17 02:39:19.290661 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-17 02:39:19.364986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-17 02:39:19.365082 | orchestrator | 2026-02-17 02:39:19.365099 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-17 02:39:20.700700 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-17 02:39:20.700948 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-17 02:39:20.700979 | orchestrator | 2026-02-17 02:39:20.700999 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-17 02:39:21.377194 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:21.377318 | orchestrator | 2026-02-17 02:39:21.377341 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-17 02:39:21.441469 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:39:21.441539 | orchestrator | 2026-02-17 02:39:21.441546 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-17 02:39:21.529238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-17 02:39:21.529323 | orchestrator | 2026-02-17 02:39:21.529335 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-17 02:39:22.186280 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:22.186361 | orchestrator | 2026-02-17 02:39:22.186372 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-17 02:39:22.258201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-17 02:39:22.258305 | orchestrator | 2026-02-17 02:39:22.258322 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-17 02:39:23.717302 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-17 02:39:23.717408 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-17 02:39:23.717424 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:23.717437 | orchestrator | 2026-02-17 02:39:23.717450 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-17 02:39:24.465888 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:24.465989 | orchestrator | 2026-02-17 02:39:24.466006 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-17 02:39:24.533336 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:39:24.533411 | orchestrator | 2026-02-17 02:39:24.533420 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-17 02:39:24.668601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-17 02:39:24.668671 | orchestrator | 2026-02-17 02:39:24.668678 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-17 02:39:25.250479 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:25.250555 | orchestrator | 2026-02-17 02:39:25.250562 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-17 02:39:25.671084 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:25.671182 | orchestrator | 2026-02-17 02:39:25.671191 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-17 02:39:27.005997 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-17 02:39:27.006130 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-17 02:39:27.006137 | orchestrator | 2026-02-17 02:39:27.006142 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-17 02:39:27.718194 | orchestrator | changed: [testbed-manager] 2026-02-17 
02:39:27.718295 | orchestrator | 2026-02-17 02:39:27.718318 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-17 02:39:28.138781 | orchestrator | ok: [testbed-manager] 2026-02-17 02:39:28.138917 | orchestrator | 2026-02-17 02:39:28.138936 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-17 02:39:28.521120 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:28.521192 | orchestrator | 2026-02-17 02:39:28.521201 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-17 02:39:28.576865 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:39:28.576937 | orchestrator | 2026-02-17 02:39:28.576944 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-17 02:39:28.660089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-17 02:39:28.660244 | orchestrator | 2026-02-17 02:39:28.660262 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-17 02:39:28.718392 | orchestrator | ok: [testbed-manager] 2026-02-17 02:39:28.718478 | orchestrator | 2026-02-17 02:39:28.718489 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-17 02:39:30.902107 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-17 02:39:30.902198 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-17 02:39:30.902209 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-17 02:39:30.902217 | orchestrator | 2026-02-17 02:39:30.902226 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-17 02:39:31.643132 | orchestrator | changed: [testbed-manager] 2026-02-17 
02:39:31.643207 | orchestrator | 2026-02-17 02:39:31.643215 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-17 02:39:32.383060 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:32.383184 | orchestrator | 2026-02-17 02:39:32.383207 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-17 02:39:33.132792 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:33.132895 | orchestrator | 2026-02-17 02:39:33.132912 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-17 02:39:33.201126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-17 02:39:33.201222 | orchestrator | 2026-02-17 02:39:33.201240 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-17 02:39:33.258071 | orchestrator | ok: [testbed-manager] 2026-02-17 02:39:33.258193 | orchestrator | 2026-02-17 02:39:33.258211 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-17 02:39:34.043200 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-17 02:39:34.043287 | orchestrator | 2026-02-17 02:39:34.043300 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-17 02:39:34.128019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-17 02:39:34.128111 | orchestrator | 2026-02-17 02:39:34.128125 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-17 02:39:34.867341 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:34.867438 | orchestrator | 2026-02-17 02:39:34.867453 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-17 02:39:35.544839 | orchestrator | ok: [testbed-manager] 2026-02-17 02:39:35.544926 | orchestrator | 2026-02-17 02:39:35.544938 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-17 02:39:35.599411 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:39:35.599486 | orchestrator | 2026-02-17 02:39:35.599496 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-17 02:39:35.658963 | orchestrator | ok: [testbed-manager] 2026-02-17 02:39:35.659048 | orchestrator | 2026-02-17 02:39:35.659063 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-17 02:39:36.463562 | orchestrator | changed: [testbed-manager] 2026-02-17 02:39:36.463657 | orchestrator | 2026-02-17 02:39:36.463666 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-17 02:40:54.505571 | orchestrator | changed: [testbed-manager] 2026-02-17 02:40:54.505717 | orchestrator | 2026-02-17 02:40:54.505744 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-17 02:40:55.604692 | orchestrator | ok: [testbed-manager] 2026-02-17 02:40:55.604790 | orchestrator | 2026-02-17 02:40:55.604874 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-17 02:40:55.667990 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:40:55.668123 | orchestrator | 2026-02-17 02:40:55.668153 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-17 02:41:05.810711 | orchestrator | changed: [testbed-manager] 2026-02-17 02:41:05.810875 | orchestrator | 2026-02-17 02:41:05.810901 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
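Version gates appear twice in this trace: the `semver` helper earlier (`semver 9.5.0 7.0.0` → `1`, `semver 9.5.0 10.0.0-0` → `-1`) decides whether to append `enable_osism_kubernetes: true`, and the MariaDB healthcheck task just above is selected by a `>= 11.0.0` check. A sketch of such a three-way comparator, assuming plain `MAJOR.MINOR.PATCH` input (pre-release tags like `10.0.0-0` are compared only by their numeric prefix here; the real helper may differ):

```shell
# semver_cmp A B: print -1, 0, or 1 as A is older than, equal to, or
# newer than B. Numeric dotted components only; "-suffix" is stripped.
semver_cmp() {
    local IFS=.
    local -a a=(${1%%-*}) b=(${2%%-*})
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
    done
    echo 0
}

# Gate an optional feature on a minimum manager version, as the trace does:
if [[ $(semver_cmp 9.5.0 7.0.0) -ge 0 ]]; then
    echo 'enable_osism_kubernetes: true'   # appended to the configuration
fi
```

This reproduces the three comparisons visible in the log: `9.5.0` vs `7.0.0` yields `1`, `9.5.0` vs `10.0.0-0` yields `-1`, and `2024.2` vs `2025.1` yields `-1`.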
2026-02-17 02:41:05.934790 | orchestrator | ok: [testbed-manager] 2026-02-17 02:41:05.935048 | orchestrator | 2026-02-17 02:41:05.935082 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-17 02:41:05.935102 | orchestrator | 2026-02-17 02:41:05.935113 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-17 02:41:05.996143 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:41:05.996238 | orchestrator | 2026-02-17 02:41:05.996255 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-17 02:42:06.064422 | orchestrator | Pausing for 60 seconds 2026-02-17 02:42:06.064596 | orchestrator | changed: [testbed-manager] 2026-02-17 02:42:06.064613 | orchestrator | 2026-02-17 02:42:06.064626 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-17 02:42:09.184285 | orchestrator | changed: [testbed-manager] 2026-02-17 02:42:09.184408 | orchestrator | 2026-02-17 02:42:09.184432 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-17 02:43:11.388958 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-17 02:43:11.389069 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-17 02:43:11.389103 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
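The "Wait for an healthy manager service" handler above is an Ansible `until`/`retries` loop: it re-runs a health probe (50 retries budgeted; here it succeeded after three failures). The same pattern in plain shell might look like this sketch — `retry_until` is a hypothetical helper, and the commented probe assumes a Docker healthcheck named container:

```shell
# retry_until RETRIES DELAY CMD...: re-run CMD up to RETRIES times,
# sleeping DELAY seconds between attempts; fail only if all attempts fail.
retry_until() {
    local retries=$1 delay=$2; shift 2
    local attempt
    for (( attempt = 1; attempt <= retries; attempt++ )); do
        "$@" && return 0
        echo "FAILED - RETRYING ($(( retries - attempt )) retries left)." >&2
        sleep "$delay"
    done
    return 1
}

# Illustrative probe (not from the playbook):
# retry_until 50 5 sh -c \
#   '[ "$(docker inspect -f "{{.State.Health.Status}}" manager)" = healthy ]'
```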
2026-02-17 02:43:11.389115 | orchestrator | changed: [testbed-manager] 2026-02-17 02:43:11.389126 | orchestrator | 2026-02-17 02:43:11.389137 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-17 02:43:23.409863 | orchestrator | changed: [testbed-manager] 2026-02-17 02:43:23.410062 | orchestrator | 2026-02-17 02:43:23.410075 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-17 02:43:23.488006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-17 02:43:23.488092 | orchestrator | 2026-02-17 02:43:23.488104 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-17 02:43:23.488124 | orchestrator | 2026-02-17 02:43:23.488131 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-17 02:43:23.542724 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:43:23.542823 | orchestrator | 2026-02-17 02:43:23.542845 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-17 02:43:23.629473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-17 02:43:23.629561 | orchestrator | 2026-02-17 02:43:23.629574 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-17 02:43:24.425216 | orchestrator | changed: [testbed-manager] 2026-02-17 02:43:24.425317 | orchestrator | 2026-02-17 02:43:24.425333 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-17 02:43:27.856735 | orchestrator | ok: [testbed-manager] 2026-02-17 02:43:27.856837 | orchestrator | 2026-02-17 02:43:27.856854 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-17 02:43:27.929595 | orchestrator | ok: [testbed-manager] => { 2026-02-17 02:43:27.929693 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-17 02:43:27.929718 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-17 02:43:27.929737 | orchestrator | "Checking running containers against expected versions...", 2026-02-17 02:43:27.929757 | orchestrator | "", 2026-02-17 02:43:27.929777 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-17 02:43:27.929794 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-17 02:43:27.929812 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.929831 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-17 02:43:27.929850 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.929870 | orchestrator | "", 2026-02-17 02:43:27.929953 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-17 02:43:27.930007 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-17 02:43:27.930100 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.930122 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-17 02:43:27.930142 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.930162 | orchestrator | "", 2026-02-17 02:43:27.930177 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-17 02:43:27.930189 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-17 02:43:27.930199 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.930210 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-17 02:43:27.930220 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.930238 | orchestrator | 
"", 2026-02-17 02:43:27.930255 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-17 02:43:27.930275 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-17 02:43:27.930293 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.930310 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-17 02:43:27.930328 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.930347 | orchestrator | "", 2026-02-17 02:43:27.930368 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-17 02:43:27.930388 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-17 02:43:27.930407 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.930425 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-17 02:43:27.930444 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.930464 | orchestrator | "", 2026-02-17 02:43:27.930481 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-17 02:43:27.930499 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.930513 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.930532 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.930550 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.930693 | orchestrator | "", 2026-02-17 02:43:27.930720 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-17 02:43:27.930739 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-17 02:43:27.930758 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.930777 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-17 02:43:27.930797 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.930815 | orchestrator | "", 2026-02-17 02:43:27.930833 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-17 02:43:27.930852 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-17 02:43:27.930871 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.930917 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-17 02:43:27.930935 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.930952 | orchestrator | "", 2026-02-17 02:43:27.930970 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-17 02:43:27.930989 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-17 02:43:27.931008 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.931026 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-17 02:43:27.931043 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.931060 | orchestrator | "", 2026-02-17 02:43:27.931079 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-17 02:43:27.931098 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-17 02:43:27.931117 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.931135 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-17 02:43:27.931155 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.931173 | orchestrator | "", 2026-02-17 02:43:27.931191 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-17 02:43:27.931228 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931248 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.931266 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931284 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.931303 | orchestrator | "", 2026-02-17 02:43:27.931321 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-17 02:43:27.931340 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931359 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.931379 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931397 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.931415 | orchestrator | "", 2026-02-17 02:43:27.931435 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-17 02:43:27.931454 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931472 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.931491 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931510 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.931529 | orchestrator | "", 2026-02-17 02:43:27.931548 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-17 02:43:27.931567 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931584 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.931604 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931649 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.931671 | orchestrator | "", 2026-02-17 02:43:27.931690 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-17 02:43:27.931708 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931739 | orchestrator | " Enabled: true", 2026-02-17 02:43:27.931760 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-17 02:43:27.931779 | orchestrator | " Status: ✅ MATCH", 2026-02-17 02:43:27.931798 | orchestrator | "", 2026-02-17 02:43:27.931816 | orchestrator | "=== Summary ===", 2026-02-17 02:43:27.931835 | orchestrator | "Errors (version mismatches): 0", 2026-02-17 02:43:27.931854 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-17 02:43:27.931874 | orchestrator | "", 2026-02-17 02:43:27.931929 | orchestrator | "✅ All running containers match expected versions!" 2026-02-17 02:43:27.931949 | orchestrator | ] 2026-02-17 02:43:27.931969 | orchestrator | } 2026-02-17 02:43:27.931988 | orchestrator | 2026-02-17 02:43:27.932008 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-17 02:43:27.991663 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:43:27.991759 | orchestrator | 2026-02-17 02:43:27.991775 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 02:43:27.991788 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-17 02:43:27.991800 | orchestrator | 2026-02-17 02:43:28.121043 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-17 02:43:28.121264 | orchestrator | + deactivate 2026-02-17 02:43:28.121289 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-17 02:43:28.121302 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 02:43:28.121312 | orchestrator | + export PATH 2026-02-17 02:43:28.121356 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-17 02:43:28.121367 | orchestrator | + '[' -n '' ']' 2026-02-17 02:43:28.121377 | orchestrator | + hash -r 2026-02-17 02:43:28.121387 | orchestrator | + '[' -n '' ']' 2026-02-17 02:43:28.121396 | orchestrator | + unset VIRTUAL_ENV 2026-02-17 02:43:28.121406 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-17 02:43:28.121416 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-17 02:43:28.121426 | orchestrator | + unset -f deactivate 2026-02-17 02:43:28.121437 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-17 02:43:28.130677 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-17 02:43:28.130762 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-17 02:43:28.130795 | orchestrator | + local max_attempts=60 2026-02-17 02:43:28.130802 | orchestrator | + local name=ceph-ansible 2026-02-17 02:43:28.130809 | orchestrator | + local attempt_num=1 2026-02-17 02:43:28.131199 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-17 02:43:28.163098 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-17 02:43:28.163192 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-17 02:43:28.163207 | orchestrator | + local max_attempts=60 2026-02-17 02:43:28.163215 | orchestrator | + local name=kolla-ansible 2026-02-17 02:43:28.163221 | orchestrator | + local attempt_num=1 2026-02-17 02:43:28.163778 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-17 02:43:28.198974 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-17 02:43:28.199060 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-17 02:43:28.199074 | orchestrator | + local max_attempts=60 2026-02-17 02:43:28.199081 | orchestrator | + local name=osism-ansible 2026-02-17 02:43:28.199088 | orchestrator | + local attempt_num=1 2026-02-17 02:43:28.199446 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-17 02:43:28.235527 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-17 02:43:28.235610 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-17 02:43:28.235620 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-17 02:43:29.002694 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-17 02:43:29.212146 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-17 02:43:29.212275 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-17 02:43:29.212295 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-17 02:43:29.212308 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-17 02:43:29.212326 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-17 02:43:29.212371 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-17 02:43:29.212392 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-17 02:43:29.212409 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-17 02:43:29.212428 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-17 02:43:29.212462 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-17 02:43:29.212481 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-17 02:43:29.212499 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-17 02:43:29.212516 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-17 02:43:29.212566 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-17 02:43:29.212588 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-17 02:43:29.212608 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-17 02:43:29.219205 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-17 02:43:29.271641 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-17 02:43:29.271742 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-17 02:43:29.277391 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-17 02:43:41.736329 | orchestrator | 2026-02-17 02:43:41 | INFO  | Task 9f93647e-8c4f-41c7-92a5-fc3aa8390821 (resolvconf) was prepared for execution. 2026-02-17 02:43:41.736408 | orchestrator | 2026-02-17 02:43:41 | INFO  | It takes a moment until task 9f93647e-8c4f-41c7-92a5-fc3aa8390821 (resolvconf) has been started and output is visible here. 
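The trace above gates a configuration tweak on `semver 9.5.0 7.0.0` printing `1`, i.e. the helper appears to emit a three-way comparison result (`-1`, `0`, or `1`). A hedged reimplementation of that convention using GNU `sort -V` (this is an illustrative sketch, not the actual `semver` helper used by the testbed scripts):

```shell
# Sketch: compare two semantic versions and print -1, 0, or 1,
# matching the convention the trace relies on
# (semver 9.5.0 7.0.0 -> 1, because 9.5.0 > 7.0.0).
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1   # $1 sorts first under version ordering, so $1 < $2
    else
        echo 1    # $1 > $2
    fi
}
```

The caller then branches with `[[ $(semver_cmp "$have" "$want") -ge 0 ]]`, exactly as the `[[ 1 -ge 0 ]]` line in the trace does.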
2026-02-17 02:43:56.873591 | orchestrator | 2026-02-17 02:43:56.873698 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-17 02:43:56.873712 | orchestrator | 2026-02-17 02:43:56.873722 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 02:43:56.873732 | orchestrator | Tuesday 17 February 2026 02:43:46 +0000 (0:00:00.151) 0:00:00.151 ****** 2026-02-17 02:43:56.873741 | orchestrator | ok: [testbed-manager] 2026-02-17 02:43:56.873751 | orchestrator | 2026-02-17 02:43:56.873760 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-17 02:43:56.873770 | orchestrator | Tuesday 17 February 2026 02:43:50 +0000 (0:00:04.093) 0:00:04.244 ****** 2026-02-17 02:43:56.873779 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:43:56.873792 | orchestrator | 2026-02-17 02:43:56.873803 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-17 02:43:56.873814 | orchestrator | Tuesday 17 February 2026 02:43:50 +0000 (0:00:00.066) 0:00:04.311 ****** 2026-02-17 02:43:56.873825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-17 02:43:56.873837 | orchestrator | 2026-02-17 02:43:56.873848 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-17 02:43:56.873858 | orchestrator | Tuesday 17 February 2026 02:43:50 +0000 (0:00:00.090) 0:00:04.401 ****** 2026-02-17 02:43:56.873890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-17 02:43:56.873972 | orchestrator | 2026-02-17 02:43:56.873992 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-17 02:43:56.874006 | orchestrator | Tuesday 17 February 2026 02:43:50 +0000 (0:00:00.091) 0:00:04.493 ****** 2026-02-17 02:43:56.874077 | orchestrator | ok: [testbed-manager] 2026-02-17 02:43:56.874089 | orchestrator | 2026-02-17 02:43:56.874100 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-17 02:43:56.874111 | orchestrator | Tuesday 17 February 2026 02:43:51 +0000 (0:00:01.244) 0:00:05.737 ****** 2026-02-17 02:43:56.874122 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:43:56.874135 | orchestrator | 2026-02-17 02:43:56.874148 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-17 02:43:56.874166 | orchestrator | Tuesday 17 February 2026 02:43:51 +0000 (0:00:00.071) 0:00:05.809 ****** 2026-02-17 02:43:56.874262 | orchestrator | ok: [testbed-manager] 2026-02-17 02:43:56.874290 | orchestrator | 2026-02-17 02:43:56.874311 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-17 02:43:56.874332 | orchestrator | Tuesday 17 February 2026 02:43:52 +0000 (0:00:00.551) 0:00:06.360 ****** 2026-02-17 02:43:56.874351 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:43:56.874370 | orchestrator | 2026-02-17 02:43:56.874389 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-17 02:43:56.874407 | orchestrator | Tuesday 17 February 2026 02:43:52 +0000 (0:00:00.076) 0:00:06.437 ****** 2026-02-17 02:43:56.874423 | orchestrator | changed: [testbed-manager] 2026-02-17 02:43:56.874443 | orchestrator | 2026-02-17 02:43:56.874463 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-17 02:43:56.874482 | orchestrator | Tuesday 17 February 2026 02:43:52 +0000 (0:00:00.568) 0:00:07.006 ****** 2026-02-17 02:43:56.874501 | orchestrator | changed: 
[testbed-manager] 2026-02-17 02:43:56.874520 | orchestrator | 2026-02-17 02:43:56.874533 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-17 02:43:56.874544 | orchestrator | Tuesday 17 February 2026 02:43:54 +0000 (0:00:01.216) 0:00:08.223 ****** 2026-02-17 02:43:56.874555 | orchestrator | ok: [testbed-manager] 2026-02-17 02:43:56.874567 | orchestrator | 2026-02-17 02:43:56.874578 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-17 02:43:56.874588 | orchestrator | Tuesday 17 February 2026 02:43:55 +0000 (0:00:01.076) 0:00:09.300 ****** 2026-02-17 02:43:56.874599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-17 02:43:56.874610 | orchestrator | 2026-02-17 02:43:56.874621 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-17 02:43:56.874632 | orchestrator | Tuesday 17 February 2026 02:43:55 +0000 (0:00:00.090) 0:00:09.391 ****** 2026-02-17 02:43:56.874642 | orchestrator | changed: [testbed-manager] 2026-02-17 02:43:56.874653 | orchestrator | 2026-02-17 02:43:56.874664 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 02:43:56.874675 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-17 02:43:56.874686 | orchestrator | 2026-02-17 02:43:56.874698 | orchestrator | 2026-02-17 02:43:56.874721 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 02:43:56.874748 | orchestrator | Tuesday 17 February 2026 02:43:56 +0000 (0:00:01.219) 0:00:10.611 ****** 2026-02-17 02:43:56.874765 | orchestrator | =============================================================================== 2026-02-17 02:43:56.874783 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.09s 2026-02-17 02:43:56.874799 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.24s 2026-02-17 02:43:56.874815 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.22s 2026-02-17 02:43:56.874830 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.22s 2026-02-17 02:43:56.874847 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.08s 2026-02-17 02:43:56.874863 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-02-17 02:43:56.874934 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s 2026-02-17 02:43:56.874956 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-02-17 02:43:56.874975 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-02-17 02:43:56.874996 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-17 02:43:56.875015 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-17 02:43:56.875035 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-02-17 02:43:56.875074 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-17 02:43:57.267206 | orchestrator | + osism apply sshconfig 2026-02-17 02:44:09.437194 | orchestrator | 2026-02-17 02:44:09 | INFO  | Task 6fd70f23-ccd6-4159-980b-aca678f210f8 (sshconfig) was prepared for execution. 
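The `osism apply sshconfig` run that follows writes one config fragment per host into `.ssh/config.d` and then assembles them into a single SSH client config. A minimal sketch of that fragment-plus-assemble pattern, under assumptions (paths, host list, and the `dragon` user are illustrative; the role's actual templates differ):

```shell
# Sketch: build an SSH client config from per-host fragments,
# analogous to the sshconfig role's config.d + assemble steps.
workdir="$(mktemp -d)"
confdir="$workdir/config.d"
mkdir -p "$confdir"
for host in testbed-manager testbed-node-0 testbed-node-1; do
    cat > "$confdir/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking yes
EOF
done
# Assemble the fragments (in sorted order) into one config file,
# as the "Assemble ssh config" task does.
cat "$confdir"/* > "$workdir/config"
```

Keeping one fragment per host makes the per-host loop idempotent: re-running overwrites each fragment in place before reassembly.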
2026-02-17 02:44:09.437333 | orchestrator | 2026-02-17 02:44:09 | INFO  | It takes a moment until task 6fd70f23-ccd6-4159-980b-aca678f210f8 (sshconfig) has been started and output is visible here. 2026-02-17 02:44:22.331688 | orchestrator | 2026-02-17 02:44:22.331806 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-17 02:44:22.331828 | orchestrator | 2026-02-17 02:44:22.331859 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-17 02:44:22.331871 | orchestrator | Tuesday 17 February 2026 02:44:14 +0000 (0:00:00.182) 0:00:00.182 ****** 2026-02-17 02:44:22.331890 | orchestrator | ok: [testbed-manager] 2026-02-17 02:44:22.331905 | orchestrator | 2026-02-17 02:44:22.331979 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-17 02:44:22.331990 | orchestrator | Tuesday 17 February 2026 02:44:14 +0000 (0:00:00.621) 0:00:00.804 ****** 2026-02-17 02:44:22.332001 | orchestrator | changed: [testbed-manager] 2026-02-17 02:44:22.332013 | orchestrator | 2026-02-17 02:44:22.332025 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-17 02:44:22.332038 | orchestrator | Tuesday 17 February 2026 02:44:15 +0000 (0:00:00.552) 0:00:01.357 ****** 2026-02-17 02:44:22.332049 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-17 02:44:22.332063 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-17 02:44:22.332076 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-17 02:44:22.332089 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-17 02:44:22.332100 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-17 02:44:22.332108 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-17 02:44:22.332115 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-17 02:44:22.332122 | orchestrator | 2026-02-17 02:44:22.332130 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-17 02:44:22.332137 | orchestrator | Tuesday 17 February 2026 02:44:21 +0000 (0:00:06.178) 0:00:07.535 ****** 2026-02-17 02:44:22.332145 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:44:22.332152 | orchestrator | 2026-02-17 02:44:22.332159 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-17 02:44:22.332167 | orchestrator | Tuesday 17 February 2026 02:44:21 +0000 (0:00:00.087) 0:00:07.622 ****** 2026-02-17 02:44:22.332174 | orchestrator | changed: [testbed-manager] 2026-02-17 02:44:22.332181 | orchestrator | 2026-02-17 02:44:22.332188 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 02:44:22.332197 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 02:44:22.332205 | orchestrator | 2026-02-17 02:44:22.332212 | orchestrator | 2026-02-17 02:44:22.332219 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 02:44:22.332226 | orchestrator | Tuesday 17 February 2026 02:44:22 +0000 (0:00:00.605) 0:00:08.228 ****** 2026-02-17 02:44:22.332234 | orchestrator | =============================================================================== 2026-02-17 02:44:22.332241 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.18s 2026-02-17 02:44:22.332252 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.62s 2026-02-17 02:44:22.332269 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2026-02-17 02:44:22.332283 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.55s 2026-02-17 02:44:22.332350 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-02-17 02:44:22.692642 | orchestrator | + osism apply known-hosts 2026-02-17 02:44:34.839015 | orchestrator | 2026-02-17 02:44:34 | INFO  | Task ed7b40ae-9df4-4668-a723-1df5979eef3b (known-hosts) was prepared for execution. 2026-02-17 02:44:34.839097 | orchestrator | 2026-02-17 02:44:34 | INFO  | It takes a moment until task ed7b40ae-9df4-4668-a723-1df5979eef3b (known-hosts) has been started and output is visible here. 2026-02-17 02:44:52.573445 | orchestrator | 2026-02-17 02:44:52.573552 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-17 02:44:52.573567 | orchestrator | 2026-02-17 02:44:52.573578 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-17 02:44:52.573589 | orchestrator | Tuesday 17 February 2026 02:44:39 +0000 (0:00:00.169) 0:00:00.169 ****** 2026-02-17 02:44:52.573600 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-17 02:44:52.573610 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-17 02:44:52.573620 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-17 02:44:52.573630 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-17 02:44:52.573640 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-17 02:44:52.573649 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-17 02:44:52.573658 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-17 02:44:52.573668 | orchestrator | 2026-02-17 02:44:52.573678 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-17 02:44:52.573688 | orchestrator | Tuesday 17 February 2026 02:44:45 +0000 (0:00:06.132) 0:00:06.301 ****** 2026-02-17 
02:44:52.573700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-17 02:44:52.573712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-17 02:44:52.573722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-17 02:44:52.573731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-17 02:44:52.573741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-17 02:44:52.573760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-17 02:44:52.573771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-17 02:44:52.573781 | orchestrator | 2026-02-17 02:44:52.573791 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:44:52.573801 | orchestrator | Tuesday 17 February 2026 02:44:45 +0000 (0:00:00.171) 0:00:06.473 ****** 2026-02-17 02:44:52.573811 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSYrBXGPvAkiCRtwNoVEx7ZTvMJV9T+J5x8a0OWGbihGC6yJ4sN9tpbzYveWtkMWIpIAJuQpIhbfYlHFUkH9J0=) 2026-02-17 02:44:52.573822 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdI7SiSyvzkSwRvLqgNJT1/8lnUR3nE0pC4yjfQfLGb) 2026-02-17 02:44:52.573840 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpO3g7bnNwd8nP7NgauosRjnGIFIDxIiN+lz+AZdDjSqaxo/KDAVBGxzAc/bD1fdJePylFY8H8m9Jf8dJQdybKeg6SfUyLNAfU+rVtouYAXfehQggBZf3wdk+0NvHgEoHe+QNaMZ3KZlaFuf53IxIOLesBA14vRbl2/U9WRLHT3KwZDqDyA+tZhL5t3syKinfPvgRX19xZ06NBUh1UkYU5GSXTX2ra8MV5QJFwWWjXtojK2/1G2IXV0WA2o7in2Qqlocjl6BBY/twgGvoY3n99X/vuWle16ku2h/aiPGJb3UqK/xUGq07i/H1kCpOstVrUBWIyPhqBCZv+7KMA3mfenFDxBEzREZd+J0pChQVdawcJv7KDPxsk60//OkVHu1kDJBD/jdGhWVhGmGwrmSqiH2CK4UJo6LfK06pD+kL7a0yeGX9efHstMcJTvBxTRIRQ2y2aKArL6CAHCDwVDHVkskBFr8cD1jcvlPObxA1jP4It+T4cPqVb34uwZoDHqP8=) 2026-02-17 02:44:52.573879 | orchestrator | 2026-02-17 02:44:52.573894 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:44:52.573908 | orchestrator | Tuesday 17 February 2026 02:44:46 +0000 (0:00:01.259) 0:00:07.733 ****** 2026-02-17 02:44:52.574082 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH8v2ynFjwRe/yaUWbbWQJtBQuL58KxghLftDnrdn4K1L4DJbrhmVbYid8FvCGKz3VeODv3memP+Zewyqij7FoVfbJj8Ku6Io38VCaabk0q+nVkmw/AL4DjtQYJ8gWVWvHQDmbDWMqmVYeSRkPAOs6AxrtCCis5IUeHEUbFb2Rc93rTenhyETwL9Za5r1eN8IScHqPV587RGwl2dPh3d+zA92QGopAUYxIsIGpZvjItv7MtnKPEhoPSa90L9kOOf0g23y/1g5JCdCXe/TiD/c50Q1hiaW0zlf4uv5W7KBK8NOhWLTIWZ5xHPQlSXREAhrp+bgTw5bGYDMVJQAhcUWvDftm0bY9PnJNgDc9PKNk7ms+x2m72TC6GeR+VKaN0x9Ntn+0zl8Xr47id9NvHdtY7VmxyE3VkAzl42BA7Tse4MzrmAeuLS4oeNpJsLKJ4LegjLB3c9vpkeif7GqqLBbDOU6q1JPx/805M877Ry1m6nH8bV0XZGnxgh+tXqHo1ks=) 2026-02-17 02:44:52.574115 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICL5/ySual95L9Lkhz3K6P23NlMmmRy+vTcXGwcbDnqH) 2026-02-17 02:44:52.574131 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAqSykAbuW9b35PRNs5RTt99Uc9MyOwNNVE5vzjQ/i5CZZKbVdioDVI28/pv+TKa2BpYW5IN2Cer59Bat7lRsX4=) 2026-02-17 02:44:52.574148 | orchestrator | 2026-02-17 02:44:52.574164 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:44:52.574182 | orchestrator | Tuesday 17 February 2026 02:44:48 +0000 (0:00:01.178) 0:00:08.911 ****** 2026-02-17 02:44:52.574201 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBpUrBfcJdlmudFlcVgXON0RpfCz+hW9+Xf8ZJJxkIMp) 2026-02-17 02:44:52.574219 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOPbaWQ/IiBDOs0lxTPUAzI2wjAP99G8uMaZ6ucR2im1zhzcc7AFVNIsNNunpoJHP7tWCiQDZuAw1pck4Lg6wqcn9s+fYuiVRK9BuznLLavU2U7fAuflM3a26sp3dNn6s1Xh0nb7lZwbeWFOZnEv8voPxKmt506pkWpLmKYWBvPL1DTa6RjL6baQkc+t1CwQ5BT+9JpyryF1BPf9Ssr9Iy7+D2znIM6EwhVCRlxEBOD2k+2ioGlPdLOAwPpWl7NezA9RQ/3UIADDCBFntfU28N81RA1yaVQywvVwctZ+Tf2s4R8ks5DJp8e9INH+iQW0iWZArM4Gdi5I0R5LCGRXYED8APloNtiUnaGc72LxkwoXbkPVRaOqSGaZCFjgZObZKvu/PH9kZyhE7+FKWkpm7Gp66CuoDI2/zWseIaWN7oIU08zOGkIAxC9FvdD1smqPqMG8yJi/k7nBWOX8I78wK0mB7MQnc9tuOD7bbmbDh8X09tfGEZbLTVPmUEaNbm4mE=) 2026-02-17 02:44:52.574237 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJFBFoYiaZsJ246aniyjv/bZmMylSv5WhqUx+A1VYGAjA8UBEfhgLXmvJ/Qkvt1Q7FpkIwXD4w9cV9HDLvud50Q=) 2026-02-17 02:44:52.574254 | orchestrator | 2026-02-17 02:44:52.574271 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:44:52.574288 | orchestrator | Tuesday 17 February 2026 02:44:49 +0000 (0:00:01.105) 0:00:10.017 ****** 
2026-02-17 02:44:52.574305 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL2z0uyski8491TV3/vq5R2GRCQ5DZRNBAAB5eCP57xr) 2026-02-17 02:44:52.574323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYhzQVoBkQdkGNpLK05Yx24dseXva8OXz0tBW88Y0BXJ09jQEjXyC9BqRWXFUpCl47Tpykuz87AOTI/H5PgPV+H983oQNrIe9dyGA4ta7N8mzH6ShYfXZ6kEN9jwfhUI+Fg3iYbjokxAzE3oSun/b6od/rumXOmM8+nJ96Dode9y3HFVHWzz919TNl6EzVQF1a9uTnzAifXmsyjzO1NqqlrOctqqBeAsBIgePHZR5154SY4j6RumpKvALnl6efu08wC5g+0g4ai0NI7HKggbDq42aUT0dwVupBcRJ31IHB9AohylqN/KTfpuUj8xm7nK2hArG6MS1wV7iynOqdztGUr3uC3Yy1+PCpIccmugZX9i4biBjtEfyAGyDBmBFpj1x0GNaJRe4/QMHc7SNsE0aErpyOP/37s0Q2poqv/D6PVPQoNWlt1ImLU2oq2ZwyOrTrwgubympBFQFM5MAxpLhWkqAgUU3SOis3TufTwxeXan35GXuBWOWoa97n6+2L+6E=) 2026-02-17 02:44:52.574357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoyPbSogzdXoxNo0o4QT21GAV+9hGJDNwvG3S3dbsx5J2aQQO6r2l0A8y4v98G4YD6Gtgq0/2pUpBLSJkvbHq0=) 2026-02-17 02:44:52.574374 | orchestrator | 2026-02-17 02:44:52.574391 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:44:52.574407 | orchestrator | Tuesday 17 February 2026 02:44:50 +0000 (0:00:01.116) 0:00:11.133 ****** 2026-02-17 02:44:52.574514 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCPaZT1EZaY0HXwQgqYNkIshuuoYxsycHYkxOvBIVocZAkwTkzauGKYLv+BL4i2wrSIWy+aniLaDDqeq2qUs0D5TWyy8PlbjwbB4bZlRhb+hDn1Xsz1CU3fdoXMjtSCFUt/buz6s0pNNbFpbEBLVX5M7jgBqFR7U6giPJTSTJtCkxihuH4f1kRlOLw9k22zRLdwEJp3NN2Lcz0CO2l2eLgQ589nmi+/BTIIg9VsHbfz6JvlbFT3OgPoFhOi9qOPc8I7juPCdIvpSBxB0AbAnsBS8vVbysX6myQBUrsD7Xt0tk57g2TtsXjDVKZ/qeO87eYkffY8VObS5P5VgZ4Zz6mHy0TyfsAtsvsYbHoqvPcEHDJuM8LT5XscfQ6JjHethjk9goxUcUZ9oDnIjq9pESn5pk/zwnzIx33iV2Xbxu9NfHC7Hhp+KYQDIULwIIzy7Ulm9rnT3pzdjIEgDOuPbE2fDoSm+oK6qO0J0+VJp88WosrsoGX7rq4Ai3bbKZ33Av0=) 2026-02-17 02:44:52.574535 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDrckhtVbPpD5sDv4kvUyeNmfD0YB+q0gvPxWKrPgC15tEr+zHlHHntuh/x7rXWH2hJTlfSMK0AeW8yQ/ew7WwU=) 2026-02-17 02:44:52.574552 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMNnN2mrchWN/AdBrsaCpKKgyciL9s5ksyMY6on6OTBg) 2026-02-17 02:44:52.574570 | orchestrator | 2026-02-17 02:44:52.574587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:44:52.574603 | orchestrator | Tuesday 17 February 2026 02:44:51 +0000 (0:00:01.116) 0:00:12.250 ****** 2026-02-17 02:44:52.574638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINbFRbfjwBDnu71RRwv5H6hfKlk2apYfxpYM4lUG9S3L) 2026-02-17 02:45:04.233684 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5Hl19onUesiIRyVerFVxZB502JBieBeeF0SrTF93z2msm1RRSpA3k0WYm+eTwPIJYAOICc/z8zjnoJB2kRXdBrusfkzh+SbY/9cLyakPU4h0wSiWMY0OaGRiLrxe0C9Cy6vikUrv800ZpqevpP0kaTy7fOZaOLUyZVplX20t/DtAJIlsiQyUN4Zxq3AqnkI+Kmu37gCfcfHA5p/jHmb66YbghaCP2kpihsoIaPZb8858dicTSohjdTJdEy0I6E2XrNrEo71sJXfZO+aCBXs/hyZg5JHUNTAxVbUACEaRmV8ouXOcJFCxTBx5D6jmrzt0HLU+0kWGxgxYuRkvldRQP7eFc0BeDcARujblOn4qBZX8mWaXvpa7mxJWBVpvtclihCWbYZuywKyrkHeWF5W7ffltgzXKMg8EEzGWjbEJXf9nBX/XKldb6YRs1EVNiLSyzqxapUaoRe1iodOsVH7UBCmvX2S7lWMi0WpymQJDfhL7d/PGe8RXpECJSOdOc9xc=) 2026-02-17 02:45:04.233788 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG3rPfxBexviRb797rFVnMVpVNI/HWx9fkbErypiiAtbBPheOHtEnpWDINw35kw3HnaJiASoZTWZIHaof8aUcmM=) 2026-02-17 02:45:04.233802 | orchestrator | 2026-02-17 02:45:04.233811 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:45:04.233821 | orchestrator | Tuesday 17 February 2026 02:44:52 +0000 (0:00:01.132) 0:00:13.383 ****** 2026-02-17 02:45:04.233829 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8wcKjcUQi8sDPWk84m7hIpfaeaXK3+FEerHHjgOzZ+GPqJOTFXS0dbDhFsCm5c/OiqtXA0SuJr4VtdRij3lCOr8WVZmRhv+aMXx2dQhCZ4hkfPi0dqRr2o37BthbEXPQE3QUt9lNQwH4g/hlljdPU15XdVMSJZQTh+QJgsKJu9mHdnS8j4cPi7HbjhoCcTxqbg5j2fVw9cPwf8fUU38d+WIvEYO7OqWxvjoZFql+4DYdTznGAxu9+aLmNSKKNPA+ed02+dsqjdf5KTAjJhR0X8NO7BndpNcL11Yh4gLfHLNqiqdWU5CMcvpmMINIDRh+mzZT5wy5SQ+lpWaNAmz7FWWsD5LNPcVgqkH+vKjSUMuHHVTf3KlZQrVN+aK2Km2JQKMEZKR4TtBZ9LypkIjeUJjzyPBPhRL324WZgMXtiuTlnkbXkOtlhIgS+PASUiXCfbImwIguOf6suJU8h+UpYsxBikQiMsomGu0zcGJ9OmWCoqw6XdPzG7Bi5Z7iGQ10=) 2026-02-17 02:45:04.233837 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKdjyvBHEyefs62vr5jm5eFWt/BMssjQCWPyU5p1xhexaBuS++XmrXkn2+KXElEdu1QHKC7eqPmwKhyeMEzF5EQ=) 
2026-02-17 02:45:04.233860 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC122YIR1Bz+OnpG8CrAc4BfG1pFduAZ2pGZeYTW2lRO) 2026-02-17 02:45:04.233870 | orchestrator | 2026-02-17 02:45:04.233878 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-17 02:45:04.233887 | orchestrator | Tuesday 17 February 2026 02:44:53 +0000 (0:00:01.155) 0:00:14.539 ****** 2026-02-17 02:45:04.233896 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-17 02:45:04.233904 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-17 02:45:04.233912 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-17 02:45:04.233919 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-17 02:45:04.233926 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-17 02:45:04.233987 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-17 02:45:04.233994 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-17 02:45:04.234001 | orchestrator | 2026-02-17 02:45:04.234009 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-17 02:45:04.234068 | orchestrator | Tuesday 17 February 2026 02:44:59 +0000 (0:00:05.629) 0:00:20.168 ****** 2026-02-17 02:45:04.234078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-17 02:45:04.234088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-17 02:45:04.234097 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-17 02:45:04.234105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-17 02:45:04.234113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-17 02:45:04.234121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-17 02:45:04.234129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-17 02:45:04.234137 | orchestrator | 2026-02-17 02:45:04.234157 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:45:04.234165 | orchestrator | Tuesday 17 February 2026 02:44:59 +0000 (0:00:00.172) 0:00:20.341 ****** 2026-02-17 02:45:04.234173 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdI7SiSyvzkSwRvLqgNJT1/8lnUR3nE0pC4yjfQfLGb) 2026-02-17 02:45:04.234182 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCpO3g7bnNwd8nP7NgauosRjnGIFIDxIiN+lz+AZdDjSqaxo/KDAVBGxzAc/bD1fdJePylFY8H8m9Jf8dJQdybKeg6SfUyLNAfU+rVtouYAXfehQggBZf3wdk+0NvHgEoHe+QNaMZ3KZlaFuf53IxIOLesBA14vRbl2/U9WRLHT3KwZDqDyA+tZhL5t3syKinfPvgRX19xZ06NBUh1UkYU5GSXTX2ra8MV5QJFwWWjXtojK2/1G2IXV0WA2o7in2Qqlocjl6BBY/twgGvoY3n99X/vuWle16ku2h/aiPGJb3UqK/xUGq07i/H1kCpOstVrUBWIyPhqBCZv+7KMA3mfenFDxBEzREZd+J0pChQVdawcJv7KDPxsk60//OkVHu1kDJBD/jdGhWVhGmGwrmSqiH2CK4UJo6LfK06pD+kL7a0yeGX9efHstMcJTvBxTRIRQ2y2aKArL6CAHCDwVDHVkskBFr8cD1jcvlPObxA1jP4It+T4cPqVb34uwZoDHqP8=) 2026-02-17 02:45:04.234197 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSYrBXGPvAkiCRtwNoVEx7ZTvMJV9T+J5x8a0OWGbihGC6yJ4sN9tpbzYveWtkMWIpIAJuQpIhbfYlHFUkH9J0=) 2026-02-17 02:45:04.234212 | orchestrator | 2026-02-17 02:45:04.234221 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:45:04.234229 | orchestrator | Tuesday 17 February 2026 02:45:00 +0000 (0:00:01.208) 0:00:21.549 ****** 2026-02-17 02:45:04.234238 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH8v2ynFjwRe/yaUWbbWQJtBQuL58KxghLftDnrdn4K1L4DJbrhmVbYid8FvCGKz3VeODv3memP+Zewyqij7FoVfbJj8Ku6Io38VCaabk0q+nVkmw/AL4DjtQYJ8gWVWvHQDmbDWMqmVYeSRkPAOs6AxrtCCis5IUeHEUbFb2Rc93rTenhyETwL9Za5r1eN8IScHqPV587RGwl2dPh3d+zA92QGopAUYxIsIGpZvjItv7MtnKPEhoPSa90L9kOOf0g23y/1g5JCdCXe/TiD/c50Q1hiaW0zlf4uv5W7KBK8NOhWLTIWZ5xHPQlSXREAhrp+bgTw5bGYDMVJQAhcUWvDftm0bY9PnJNgDc9PKNk7ms+x2m72TC6GeR+VKaN0x9Ntn+0zl8Xr47id9NvHdtY7VmxyE3VkAzl42BA7Tse4MzrmAeuLS4oeNpJsLKJ4LegjLB3c9vpkeif7GqqLBbDOU6q1JPx/805M877Ry1m6nH8bV0XZGnxgh+tXqHo1ks=) 2026-02-17 02:45:04.234247 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAqSykAbuW9b35PRNs5RTt99Uc9MyOwNNVE5vzjQ/i5CZZKbVdioDVI28/pv+TKa2BpYW5IN2Cer59Bat7lRsX4=) 
2026-02-17 02:45:04.234256 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICL5/ySual95L9Lkhz3K6P23NlMmmRy+vTcXGwcbDnqH) 2026-02-17 02:45:04.234264 | orchestrator | 2026-02-17 02:45:04.234272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:45:04.234281 | orchestrator | Tuesday 17 February 2026 02:45:01 +0000 (0:00:01.177) 0:00:22.726 ****** 2026-02-17 02:45:04.234290 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJFBFoYiaZsJ246aniyjv/bZmMylSv5WhqUx+A1VYGAjA8UBEfhgLXmvJ/Qkvt1Q7FpkIwXD4w9cV9HDLvud50Q=) 2026-02-17 02:45:04.234299 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOPbaWQ/IiBDOs0lxTPUAzI2wjAP99G8uMaZ6ucR2im1zhzcc7AFVNIsNNunpoJHP7tWCiQDZuAw1pck4Lg6wqcn9s+fYuiVRK9BuznLLavU2U7fAuflM3a26sp3dNn6s1Xh0nb7lZwbeWFOZnEv8voPxKmt506pkWpLmKYWBvPL1DTa6RjL6baQkc+t1CwQ5BT+9JpyryF1BPf9Ssr9Iy7+D2znIM6EwhVCRlxEBOD2k+2ioGlPdLOAwPpWl7NezA9RQ/3UIADDCBFntfU28N81RA1yaVQywvVwctZ+Tf2s4R8ks5DJp8e9INH+iQW0iWZArM4Gdi5I0R5LCGRXYED8APloNtiUnaGc72LxkwoXbkPVRaOqSGaZCFjgZObZKvu/PH9kZyhE7+FKWkpm7Gp66CuoDI2/zWseIaWN7oIU08zOGkIAxC9FvdD1smqPqMG8yJi/k7nBWOX8I78wK0mB7MQnc9tuOD7bbmbDh8X09tfGEZbLTVPmUEaNbm4mE=) 2026-02-17 02:45:04.234307 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBpUrBfcJdlmudFlcVgXON0RpfCz+hW9+Xf8ZJJxkIMp) 2026-02-17 02:45:04.234315 | orchestrator | 2026-02-17 02:45:04.234324 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:45:04.234332 | orchestrator | Tuesday 17 February 2026 02:45:03 +0000 (0:00:01.133) 0:00:23.859 ****** 2026-02-17 02:45:04.234348 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCYhzQVoBkQdkGNpLK05Yx24dseXva8OXz0tBW88Y0BXJ09jQEjXyC9BqRWXFUpCl47Tpykuz87AOTI/H5PgPV+H983oQNrIe9dyGA4ta7N8mzH6ShYfXZ6kEN9jwfhUI+Fg3iYbjokxAzE3oSun/b6od/rumXOmM8+nJ96Dode9y3HFVHWzz919TNl6EzVQF1a9uTnzAifXmsyjzO1NqqlrOctqqBeAsBIgePHZR5154SY4j6RumpKvALnl6efu08wC5g+0g4ai0NI7HKggbDq42aUT0dwVupBcRJ31IHB9AohylqN/KTfpuUj8xm7nK2hArG6MS1wV7iynOqdztGUr3uC3Yy1+PCpIccmugZX9i4biBjtEfyAGyDBmBFpj1x0GNaJRe4/QMHc7SNsE0aErpyOP/37s0Q2poqv/D6PVPQoNWlt1ImLU2oq2ZwyOrTrwgubympBFQFM5MAxpLhWkqAgUU3SOis3TufTwxeXan35GXuBWOWoa97n6+2L+6E=) 2026-02-17 02:45:09.148650 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoyPbSogzdXoxNo0o4QT21GAV+9hGJDNwvG3S3dbsx5J2aQQO6r2l0A8y4v98G4YD6Gtgq0/2pUpBLSJkvbHq0=) 2026-02-17 02:45:09.148769 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL2z0uyski8491TV3/vq5R2GRCQ5DZRNBAAB5eCP57xr) 2026-02-17 02:45:09.148823 | orchestrator | 2026-02-17 02:45:09.148841 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:45:09.148858 | orchestrator | Tuesday 17 February 2026 02:45:04 +0000 (0:00:01.182) 0:00:25.042 ****** 2026-02-17 02:45:09.148873 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDrckhtVbPpD5sDv4kvUyeNmfD0YB+q0gvPxWKrPgC15tEr+zHlHHntuh/x7rXWH2hJTlfSMK0AeW8yQ/ew7WwU=) 2026-02-17 02:45:09.148891 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCPaZT1EZaY0HXwQgqYNkIshuuoYxsycHYkxOvBIVocZAkwTkzauGKYLv+BL4i2wrSIWy+aniLaDDqeq2qUs0D5TWyy8PlbjwbB4bZlRhb+hDn1Xsz1CU3fdoXMjtSCFUt/buz6s0pNNbFpbEBLVX5M7jgBqFR7U6giPJTSTJtCkxihuH4f1kRlOLw9k22zRLdwEJp3NN2Lcz0CO2l2eLgQ589nmi+/BTIIg9VsHbfz6JvlbFT3OgPoFhOi9qOPc8I7juPCdIvpSBxB0AbAnsBS8vVbysX6myQBUrsD7Xt0tk57g2TtsXjDVKZ/qeO87eYkffY8VObS5P5VgZ4Zz6mHy0TyfsAtsvsYbHoqvPcEHDJuM8LT5XscfQ6JjHethjk9goxUcUZ9oDnIjq9pESn5pk/zwnzIx33iV2Xbxu9NfHC7Hhp+KYQDIULwIIzy7Ulm9rnT3pzdjIEgDOuPbE2fDoSm+oK6qO0J0+VJp88WosrsoGX7rq4Ai3bbKZ33Av0=) 2026-02-17 02:45:09.148909 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMNnN2mrchWN/AdBrsaCpKKgyciL9s5ksyMY6on6OTBg) 2026-02-17 02:45:09.148925 | orchestrator | 2026-02-17 02:45:09.148969 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:45:09.148984 | orchestrator | Tuesday 17 February 2026 02:45:05 +0000 (0:00:01.152) 0:00:26.194 ****** 2026-02-17 02:45:09.148999 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5Hl19onUesiIRyVerFVxZB502JBieBeeF0SrTF93z2msm1RRSpA3k0WYm+eTwPIJYAOICc/z8zjnoJB2kRXdBrusfkzh+SbY/9cLyakPU4h0wSiWMY0OaGRiLrxe0C9Cy6vikUrv800ZpqevpP0kaTy7fOZaOLUyZVplX20t/DtAJIlsiQyUN4Zxq3AqnkI+Kmu37gCfcfHA5p/jHmb66YbghaCP2kpihsoIaPZb8858dicTSohjdTJdEy0I6E2XrNrEo71sJXfZO+aCBXs/hyZg5JHUNTAxVbUACEaRmV8ouXOcJFCxTBx5D6jmrzt0HLU+0kWGxgxYuRkvldRQP7eFc0BeDcARujblOn4qBZX8mWaXvpa7mxJWBVpvtclihCWbYZuywKyrkHeWF5W7ffltgzXKMg8EEzGWjbEJXf9nBX/XKldb6YRs1EVNiLSyzqxapUaoRe1iodOsVH7UBCmvX2S7lWMi0WpymQJDfhL7d/PGe8RXpECJSOdOc9xc=) 2026-02-17 02:45:09.149013 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG3rPfxBexviRb797rFVnMVpVNI/HWx9fkbErypiiAtbBPheOHtEnpWDINw35kw3HnaJiASoZTWZIHaof8aUcmM=) 2026-02-17 02:45:09.149022 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINbFRbfjwBDnu71RRwv5H6hfKlk2apYfxpYM4lUG9S3L) 2026-02-17 02:45:09.149031 | orchestrator | 2026-02-17 02:45:09.149040 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-17 02:45:09.149048 | orchestrator | Tuesday 17 February 2026 02:45:06 +0000 (0:00:01.121) 0:00:27.316 ****** 2026-02-17 02:45:09.149058 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKdjyvBHEyefs62vr5jm5eFWt/BMssjQCWPyU5p1xhexaBuS++XmrXkn2+KXElEdu1QHKC7eqPmwKhyeMEzF5EQ=) 2026-02-17 02:45:09.149195 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8wcKjcUQi8sDPWk84m7hIpfaeaXK3+FEerHHjgOzZ+GPqJOTFXS0dbDhFsCm5c/OiqtXA0SuJr4VtdRij3lCOr8WVZmRhv+aMXx2dQhCZ4hkfPi0dqRr2o37BthbEXPQE3QUt9lNQwH4g/hlljdPU15XdVMSJZQTh+QJgsKJu9mHdnS8j4cPi7HbjhoCcTxqbg5j2fVw9cPwf8fUU38d+WIvEYO7OqWxvjoZFql+4DYdTznGAxu9+aLmNSKKNPA+ed02+dsqjdf5KTAjJhR0X8NO7BndpNcL11Yh4gLfHLNqiqdWU5CMcvpmMINIDRh+mzZT5wy5SQ+lpWaNAmz7FWWsD5LNPcVgqkH+vKjSUMuHHVTf3KlZQrVN+aK2Km2JQKMEZKR4TtBZ9LypkIjeUJjzyPBPhRL324WZgMXtiuTlnkbXkOtlhIgS+PASUiXCfbImwIguOf6suJU8h+UpYsxBikQiMsomGu0zcGJ9OmWCoqw6XdPzG7Bi5Z7iGQ10=) 2026-02-17 02:45:09.149211 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC122YIR1Bz+OnpG8CrAc4BfG1pFduAZ2pGZeYTW2lRO) 2026-02-17 02:45:09.149222 | orchestrator | 2026-02-17 02:45:09.149233 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-17 02:45:09.149256 | orchestrator | Tuesday 17 February 2026 02:45:07 +0000 (0:00:01.234) 0:00:28.550 ****** 2026-02-17 02:45:09.149267 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-17 02:45:09.149277 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-17 02:45:09.149306 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-17 02:45:09.149317 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-17 02:45:09.149327 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-17 02:45:09.149337 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-17 02:45:09.149347 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-17 02:45:09.149357 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:45:09.149367 | orchestrator | 2026-02-17 02:45:09.149378 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-17 02:45:09.149387 | orchestrator | Tuesday 17 February 2026 02:45:07 +0000 (0:00:00.172) 0:00:28.723 ****** 2026-02-17 02:45:09.149397 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:45:09.149407 | orchestrator | 2026-02-17 02:45:09.149417 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-17 02:45:09.149433 | orchestrator | Tuesday 17 February 2026 02:45:07 +0000 (0:00:00.055) 0:00:28.779 ****** 2026-02-17 02:45:09.149443 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:45:09.149453 | orchestrator | 2026-02-17 02:45:09.149463 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-17 02:45:09.149473 | orchestrator | Tuesday 17 February 2026 02:45:08 +0000 (0:00:00.062) 0:00:28.841 ****** 2026-02-17 02:45:09.149483 | orchestrator | changed: [testbed-manager] 2026-02-17 02:45:09.149492 | orchestrator | 2026-02-17 02:45:09.149502 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 02:45:09.149512 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-17 02:45:09.149524 | orchestrator | 2026-02-17 02:45:09.149534 | orchestrator | 2026-02-17 
02:45:09.149545 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 02:45:09.149554 | orchestrator | Tuesday 17 February 2026 02:45:08 +0000 (0:00:00.856) 0:00:29.697 ****** 2026-02-17 02:45:09.149563 | orchestrator | =============================================================================== 2026-02-17 02:45:09.149571 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.13s 2026-02-17 02:45:09.149580 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.63s 2026-02-17 02:45:09.149589 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-02-17 02:45:09.149597 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2026-02-17 02:45:09.149606 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-02-17 02:45:09.149614 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-17 02:45:09.149623 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-17 02:45:09.149632 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-17 02:45:09.149640 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-17 02:45:09.149648 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-17 02:45:09.149657 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-17 02:45:09.149665 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-17 02:45:09.149674 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-17 
02:45:09.149682 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-17 02:45:09.149696 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-17 02:45:09.149704 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-17 02:45:09.149713 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.86s 2026-02-17 02:45:09.149721 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-02-17 02:45:09.149730 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-02-17 02:45:09.149739 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-02-17 02:45:09.554883 | orchestrator | + osism apply squid 2026-02-17 02:45:21.814721 | orchestrator | 2026-02-17 02:45:21 | INFO  | Task 63d4a768-8dab-4e43-ac4a-53d49b043117 (squid) was prepared for execution. 2026-02-17 02:45:21.814811 | orchestrator | 2026-02-17 02:45:21 | INFO  | It takes a moment until task 63d4a768-8dab-4e43-ac4a-53d49b043117 (squid) has been started and output is visible here. 
2026-02-17 02:47:21.720375 | orchestrator | 2026-02-17 02:47:21.720479 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-17 02:47:21.720496 | orchestrator | 2026-02-17 02:47:21.720506 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-17 02:47:21.720517 | orchestrator | Tuesday 17 February 2026 02:45:26 +0000 (0:00:00.178) 0:00:00.178 ****** 2026-02-17 02:47:21.720526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-17 02:47:21.720537 | orchestrator | 2026-02-17 02:47:21.720547 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-17 02:47:21.720557 | orchestrator | Tuesday 17 February 2026 02:45:26 +0000 (0:00:00.090) 0:00:00.269 ****** 2026-02-17 02:47:21.720566 | orchestrator | ok: [testbed-manager] 2026-02-17 02:47:21.720577 | orchestrator | 2026-02-17 02:47:21.720588 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-17 02:47:21.720598 | orchestrator | Tuesday 17 February 2026 02:45:28 +0000 (0:00:01.596) 0:00:01.866 ****** 2026-02-17 02:47:21.720608 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-17 02:47:21.720617 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-17 02:47:21.720627 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-17 02:47:21.720639 | orchestrator | 2026-02-17 02:47:21.720651 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-17 02:47:21.720662 | orchestrator | Tuesday 17 February 2026 02:45:29 +0000 (0:00:01.234) 0:00:03.100 ****** 2026-02-17 02:47:21.720671 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-17 02:47:21.720681 | 
orchestrator | 2026-02-17 02:47:21.720691 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-17 02:47:21.720701 | orchestrator | Tuesday 17 February 2026 02:45:30 +0000 (0:00:01.145) 0:00:04.246 ****** 2026-02-17 02:47:21.720710 | orchestrator | ok: [testbed-manager] 2026-02-17 02:47:21.720720 | orchestrator | 2026-02-17 02:47:21.720730 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-17 02:47:21.720739 | orchestrator | Tuesday 17 February 2026 02:45:30 +0000 (0:00:00.389) 0:00:04.635 ****** 2026-02-17 02:47:21.720749 | orchestrator | changed: [testbed-manager] 2026-02-17 02:47:21.720758 | orchestrator | 2026-02-17 02:47:21.720768 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-17 02:47:21.720778 | orchestrator | Tuesday 17 February 2026 02:45:31 +0000 (0:00:00.938) 0:00:05.573 ****** 2026-02-17 02:47:21.720788 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-17 02:47:21.720803 | orchestrator | ok: [testbed-manager]
2026-02-17 02:47:21.720813 | orchestrator |
2026-02-17 02:47:21.720822 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-17 02:47:21.720862 | orchestrator | Tuesday 17 February 2026 02:46:04 +0000 (0:00:32.839) 0:00:38.413 ******
2026-02-17 02:47:21.720869 | orchestrator | changed: [testbed-manager]
2026-02-17 02:47:21.720874 | orchestrator |
2026-02-17 02:47:21.720880 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-17 02:47:21.720886 | orchestrator | Tuesday 17 February 2026 02:46:20 +0000 (0:00:15.988) 0:00:54.402 ******
2026-02-17 02:47:21.720892 | orchestrator | Pausing for 60 seconds
2026-02-17 02:47:21.720898 | orchestrator | changed: [testbed-manager]
2026-02-17 02:47:21.720904 | orchestrator |
2026-02-17 02:47:21.720910 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-17 02:47:21.720916 | orchestrator | Tuesday 17 February 2026 02:47:20 +0000 (0:01:00.093) 0:01:54.495 ******
2026-02-17 02:47:21.720921 | orchestrator | ok: [testbed-manager]
2026-02-17 02:47:21.720927 | orchestrator |
2026-02-17 02:47:21.720933 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-17 02:47:21.720940 | orchestrator | Tuesday 17 February 2026 02:47:20 +0000 (0:00:00.082) 0:01:54.577 ******
2026-02-17 02:47:21.720946 | orchestrator | changed: [testbed-manager]
2026-02-17 02:47:21.720953 | orchestrator |
2026-02-17 02:47:21.720960 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 02:47:21.720966 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:47:21.720973 | orchestrator |
2026-02-17 02:47:21.720980 | orchestrator |
2026-02-17 02:47:21.720987 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 02:47:21.720993 | orchestrator | Tuesday 17 February 2026 02:47:21 +0000 (0:00:00.637) 0:01:55.215 ******
2026-02-17 02:47:21.721050 | orchestrator | ===============================================================================
2026-02-17 02:47:21.721057 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-02-17 02:47:21.721064 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.84s
2026-02-17 02:47:21.721070 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.99s
2026-02-17 02:47:21.721092 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.60s
2026-02-17 02:47:21.721099 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.23s
2026-02-17 02:47:21.721106 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.15s
2026-02-17 02:47:21.721113 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s
2026-02-17 02:47:21.721119 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s
2026-02-17 02:47:21.721126 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s
2026-02-17 02:47:21.721132 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2026-02-17 02:47:21.721138 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-02-17 02:47:22.322471 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-17 02:47:22.323594 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-17 02:47:22.381334 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-17 02:47:22.381448 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-17 02:47:22.388274 | orchestrator | + set -e
2026-02-17 02:47:22.388526 | orchestrator | + NAMESPACE=kolla/release
2026-02-17 02:47:22.388548 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-17 02:47:22.396245 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-17 02:47:22.465882 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-17 02:47:22.466785 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-17 02:47:34.887541 | orchestrator | 2026-02-17 02:47:34 | INFO  | Task 7d992916-0509-47cb-8621-3daf63768a12 (operator) was prepared for execution.
2026-02-17 02:47:34.887682 | orchestrator | 2026-02-17 02:47:34 | INFO  | It takes a moment until task 7d992916-0509-47cb-8621-3daf63768a12 (operator) has been started and output is visible here.
2026-02-17 02:47:51.830650 | orchestrator |
2026-02-17 02:47:51.830769 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-17 02:47:51.830786 | orchestrator |
2026-02-17 02:47:51.830798 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-17 02:47:51.830809 | orchestrator | Tuesday 17 February 2026 02:47:39 +0000 (0:00:00.161) 0:00:00.161 ******
2026-02-17 02:47:51.830820 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:47:51.830832 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:47:51.830843 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:47:51.830854 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:47:51.830864 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:47:51.830875 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:47:51.830885 | orchestrator |
2026-02-17 02:47:51.830896 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-17 02:47:51.830907 | orchestrator | Tuesday 17 February 2026 02:47:43 +0000 (0:00:03.379) 0:00:03.540 ******
2026-02-17 02:47:51.830917 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:47:51.830928 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:47:51.830938 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:47:51.830965 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:47:51.830976 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:47:51.830987 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:47:51.830997 | orchestrator |
2026-02-17 02:47:51.831077 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-17 02:47:51.831104 | orchestrator |
2026-02-17 02:47:51.831121 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-17 02:47:51.831138 | orchestrator | Tuesday 17 February 2026 02:47:43 +0000 (0:00:00.804) 0:00:04.345 ******
2026-02-17 02:47:51.831156 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:47:51.831174 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:47:51.831191 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:47:51.831208 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:47:51.831226 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:47:51.831243 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:47:51.831261 | orchestrator |
2026-02-17 02:47:51.831279 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-17 02:47:51.831297 | orchestrator | Tuesday 17 February 2026 02:47:44 +0000 (0:00:00.195) 0:00:04.541 ******
2026-02-17 02:47:51.831315 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:47:51.831332 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:47:51.831350 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:47:51.831368 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:47:51.831387 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:47:51.831406 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:47:51.831425 | orchestrator |
2026-02-17 02:47:51.831444 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-17 02:47:51.831463 | orchestrator | Tuesday 17 February 2026 02:47:44 +0000 (0:00:00.186) 0:00:04.727 ******
2026-02-17 02:47:51.831481 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:47:51.831503 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:47:51.831521 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:47:51.831541 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:47:51.831559 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:47:51.831578 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:47:51.831601 | orchestrator |
2026-02-17 02:47:51.831627 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-17 02:47:51.831646 | orchestrator | Tuesday 17 February 2026 02:47:44 +0000 (0:00:00.643) 0:00:05.370 ******
2026-02-17 02:47:51.831665 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:47:51.831682 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:47:51.831700 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:47:51.831717 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:47:51.831733 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:47:51.831749 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:47:51.831800 | orchestrator |
2026-02-17 02:47:51.831820 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-17 02:47:51.831838 | orchestrator | Tuesday 17 February 2026 02:47:45 +0000 (0:00:00.862) 0:00:06.233 ******
2026-02-17 02:47:51.831857 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-17 02:47:51.831876 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-17 02:47:51.831893 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-17 02:47:51.831911 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-17 02:47:51.831930 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-17 02:47:51.831949 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-17 02:47:51.831967 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-17 02:47:51.831986 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-17 02:47:51.832000 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-17 02:47:51.832011 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-17 02:47:51.832053 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-17 02:47:51.832064 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-17 02:47:51.832075 | orchestrator |
2026-02-17 02:47:51.832085 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-17 02:47:51.832096 | orchestrator | Tuesday 17 February 2026 02:47:46 +0000 (0:00:01.170) 0:00:07.404 ******
2026-02-17 02:47:51.832107 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:47:51.832117 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:47:51.832128 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:47:51.832139 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:47:51.832149 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:47:51.832160 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:47:51.832171 | orchestrator |
2026-02-17 02:47:51.832182 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-17 02:47:51.832194 | orchestrator | Tuesday 17 February 2026 02:47:48 +0000 (0:00:01.234) 0:00:08.639 ******
2026-02-17 02:47:51.832204 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-17 02:47:51.832215 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-17 02:47:51.832226 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-17 02:47:51.832237 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-17 02:47:51.832271 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-17 02:47:51.832283 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-17 02:47:51.832294 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-17 02:47:51.832305 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-17 02:47:51.832315 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-17 02:47:51.832326 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-17 02:47:51.832337 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-17 02:47:51.832347 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-17 02:47:51.832358 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-17 02:47:51.832368 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-17 02:47:51.832379 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-17 02:47:51.832390 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-17 02:47:51.832400 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-17 02:47:51.832411 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-17 02:47:51.832422 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-17 02:47:51.832439 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-17 02:47:51.832481 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-17 02:47:51.832502 | orchestrator |
2026-02-17 02:47:51.832520 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-17 02:47:51.832540 | orchestrator | Tuesday 17 February 2026 02:47:49 +0000 (0:00:01.335) 0:00:09.975 ******
2026-02-17 02:47:51.832558 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:47:51.832575 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:47:51.832592 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:47:51.832611 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:47:51.832629 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:47:51.832647 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:47:51.832665 | orchestrator |
2026-02-17 02:47:51.832683 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-17 02:47:51.832701 | orchestrator | Tuesday 17 February 2026 02:47:49 +0000 (0:00:00.190) 0:00:10.165 ******
2026-02-17 02:47:51.832720 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:47:51.832738 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:47:51.832756 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:47:51.832774 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:47:51.832793 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:47:51.832812 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:47:51.832830 | orchestrator |
2026-02-17 02:47:51.832849 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-17 02:47:51.832868 | orchestrator | Tuesday 17 February 2026 02:47:49 +0000 (0:00:00.198) 0:00:10.364 ******
2026-02-17 02:47:51.832885 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:47:51.832904 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:47:51.832923 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:47:51.832943 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:47:51.832961 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:47:51.832978 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:47:51.832989 | orchestrator |
2026-02-17 02:47:51.833000 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-17 02:47:51.833010 | orchestrator | Tuesday 17 February 2026 02:47:50 +0000 (0:00:00.612) 0:00:10.976 ******
2026-02-17 02:47:51.833059 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:47:51.833070 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:47:51.833081 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:47:51.833091 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:47:51.833102 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:47:51.833112 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:47:51.833123 | orchestrator |
2026-02-17 02:47:51.833133 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-17 02:47:51.833144 | orchestrator | Tuesday 17 February 2026 02:47:50 +0000 (0:00:00.215) 0:00:11.192 ******
2026-02-17 02:47:51.833155 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-17 02:47:51.833182 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:47:51.833193 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-17 02:47:51.833204 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:47:51.833215 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-17 02:47:51.833225 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:47:51.833236 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-17 02:47:51.833247 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:47:51.833257 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-17 02:47:51.833268 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:47:51.833278 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-17 02:47:51.833289 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:47:51.833300 | orchestrator |
2026-02-17 02:47:51.833310 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-17 02:47:51.833321 | orchestrator | Tuesday 17 February 2026 02:47:51 +0000 (0:00:00.770) 0:00:11.963 ******
2026-02-17 02:47:51.833342 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:47:51.833353 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:47:51.833363 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:47:51.833374 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:47:51.833385 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:47:51.833395 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:47:51.833406 | orchestrator |
2026-02-17 02:47:51.833416 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-17 02:47:51.833427 | orchestrator | Tuesday 17 February 2026 02:47:51 +0000 (0:00:00.199) 0:00:12.162 ******
2026-02-17 02:47:51.833438 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:47:51.833448 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:47:51.833459 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:47:51.833469 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:47:51.833493 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:47:53.169461 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:47:53.169627 | orchestrator |
2026-02-17 02:47:53.169647 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-17 02:47:53.169662 | orchestrator | Tuesday 17 February 2026 02:47:51 +0000 (0:00:00.173) 0:00:12.336 ******
2026-02-17 02:47:53.169676 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:47:53.169688 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:47:53.169701 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:47:53.169715 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:47:53.169728 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:47:53.169740 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:47:53.169753 | orchestrator |
2026-02-17 02:47:53.169767 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-17 02:47:53.169780 | orchestrator | Tuesday 17 February 2026 02:47:51 +0000 (0:00:00.168) 0:00:12.504 ******
2026-02-17 02:47:53.169793 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:47:53.169806 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:47:53.169838 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:47:53.169851 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:47:53.169864 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:47:53.169877 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:47:53.169890 | orchestrator |
2026-02-17 02:47:53.169903 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-17 02:47:53.169917 | orchestrator | Tuesday 17 February 2026 02:47:52 +0000 (0:00:00.672) 0:00:13.177 ******
2026-02-17 02:47:53.169930 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:47:53.169943 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:47:53.169957 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:47:53.169970 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:47:53.169983 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:47:53.169996 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:47:53.170009 | orchestrator |
2026-02-17 02:47:53.170096 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 02:47:53.170111 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-17 02:47:53.170125 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-17 02:47:53.170138 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-17 02:47:53.170150 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-17 02:47:53.170163 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-17 02:47:53.170202 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-17 02:47:53.170215 | orchestrator |
2026-02-17 02:47:53.170228 | orchestrator |
2026-02-17 02:47:53.170241 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 02:47:53.170253 | orchestrator | Tuesday 17 February 2026 02:47:52 +0000 (0:00:00.234) 0:00:13.411 ******
2026-02-17 02:47:53.170266 | orchestrator | ===============================================================================
2026-02-17 02:47:53.170278 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s
2026-02-17 02:47:53.170291 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.34s
2026-02-17 02:47:53.170304 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.23s
2026-02-17 02:47:53.170316 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s
2026-02-17 02:47:53.170329 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s
2026-02-17 02:47:53.170341 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s
2026-02-17 02:47:53.170354 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.77s
2026-02-17 02:47:53.170367 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2026-02-17 02:47:53.170378 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2026-02-17 02:47:53.170390 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2026-02-17 02:47:53.170402 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-02-17 02:47:53.170413 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s
2026-02-17 02:47:53.170424 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s
2026-02-17 02:47:53.170436 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s
2026-02-17 02:47:53.170447 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s
2026-02-17 02:47:53.170458 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s
2026-02-17 02:47:53.170469 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2026-02-17 02:47:53.170481 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-02-17 02:47:53.170492 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-02-17 02:47:53.514540 | orchestrator | + osism apply --environment custom facts
2026-02-17 02:47:55.649450 | orchestrator | 2026-02-17 02:47:55 | INFO  | Trying to run play facts in environment custom
2026-02-17 02:48:05.770951 | orchestrator | 2026-02-17 02:48:05 | INFO  | Task 631dec04-b7f6-4974-a074-bddb93ff98e6 (facts) was prepared for execution.
2026-02-17 02:48:05.771132 | orchestrator | 2026-02-17 02:48:05 | INFO  | It takes a moment until task 631dec04-b7f6-4974-a074-bddb93ff98e6 (facts) has been started and output is visible here.
2026-02-17 02:48:50.281908 | orchestrator |
2026-02-17 02:48:50.282075 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-17 02:48:50.282094 | orchestrator |
2026-02-17 02:48:50.282101 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-17 02:48:50.282109 | orchestrator | Tuesday 17 February 2026 02:48:10 +0000 (0:00:00.104) 0:00:00.104 ******
2026-02-17 02:48:50.282114 | orchestrator | ok: [testbed-manager]
2026-02-17 02:48:50.282119 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:48:50.282124 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:48:50.282128 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:48:50.282132 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:48:50.282136 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:48:50.282156 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:48:50.282161 | orchestrator |
2026-02-17 02:48:50.282165 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-17 02:48:50.282168 | orchestrator | Tuesday 17 February 2026 02:48:11 +0000 (0:00:01.453) 0:00:01.558 ******
2026-02-17 02:48:50.282172 | orchestrator | ok: [testbed-manager]
2026-02-17 02:48:50.282176 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:48:50.282180 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:48:50.282184 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:48:50.282187 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:48:50.282191 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:48:50.282195 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:48:50.282199 | orchestrator |
2026-02-17 02:48:50.282203 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-17 02:48:50.282206 | orchestrator |
2026-02-17 02:48:50.282210 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-17 02:48:50.282214 | orchestrator | Tuesday 17 February 2026 02:48:12 +0000 (0:00:01.312) 0:00:02.871 ******
2026-02-17 02:48:50.282218 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:48:50.282222 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:48:50.282226 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:48:50.282230 | orchestrator |
2026-02-17 02:48:50.282233 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-17 02:48:50.282238 | orchestrator | Tuesday 17 February 2026 02:48:12 +0000 (0:00:00.116) 0:00:02.987 ******
2026-02-17 02:48:50.282242 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:48:50.282246 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:48:50.282249 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:48:50.282253 | orchestrator |
2026-02-17 02:48:50.282256 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-17 02:48:50.282260 | orchestrator | Tuesday 17 February 2026 02:48:13 +0000 (0:00:00.226) 0:00:03.214 ******
2026-02-17 02:48:50.282264 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:48:50.282268 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:48:50.282271 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:48:50.282275 | orchestrator |
2026-02-17 02:48:50.282279 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-17 02:48:50.282283 | orchestrator | Tuesday 17 February 2026 02:48:13 +0000 (0:00:00.164) 0:00:03.450 ******
2026-02-17 02:48:50.282288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 02:48:50.282293 | orchestrator |
2026-02-17 02:48:50.282297 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-17 02:48:50.282301 | orchestrator | Tuesday 17 February 2026 02:48:13 +0000 (0:00:00.164) 0:00:03.614 ******
2026-02-17 02:48:50.282304 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:48:50.282308 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:48:50.282312 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:48:50.282315 | orchestrator |
2026-02-17 02:48:50.282319 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-17 02:48:50.282323 | orchestrator | Tuesday 17 February 2026 02:48:14 +0000 (0:00:00.472) 0:00:04.087 ******
2026-02-17 02:48:50.282327 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:48:50.282330 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:48:50.282334 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:48:50.282338 | orchestrator |
2026-02-17 02:48:50.282341 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-17 02:48:50.282345 | orchestrator | Tuesday 17 February 2026 02:48:14 +0000 (0:00:00.158) 0:00:04.246 ******
2026-02-17 02:48:50.282349 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:48:50.282352 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:48:50.282356 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:48:50.282360 | orchestrator |
2026-02-17 02:48:50.282363 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-17 02:48:50.282371 | orchestrator | Tuesday 17 February 2026 02:48:15 +0000 (0:00:01.098) 0:00:05.345 ******
2026-02-17 02:48:50.282375 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:48:50.282379 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:48:50.282382 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:48:50.282386 | orchestrator |
2026-02-17 02:48:50.282390 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-17 02:48:50.282393 | orchestrator | Tuesday 17 February 2026 02:48:15 +0000 (0:00:00.485) 0:00:05.831 ******
2026-02-17 02:48:50.282397 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:48:50.282401 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:48:50.282404 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:48:50.282408 | orchestrator |
2026-02-17 02:48:50.282412 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-17 02:48:50.282447 | orchestrator | Tuesday 17 February 2026 02:48:16 +0000 (0:00:01.168) 0:00:06.999 ******
2026-02-17 02:48:50.282451 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:48:50.282455 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:48:50.282459 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:48:50.282463 | orchestrator |
2026-02-17 02:48:50.282468 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-17 02:48:50.282472 | orchestrator | Tuesday 17 February 2026 02:48:33 +0000 (0:00:16.411) 0:00:23.410 ******
2026-02-17 02:48:50.282477 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:48:50.282481 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:48:50.282485 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:48:50.282489 | orchestrator |
2026-02-17 02:48:50.282493 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-17 02:48:50.282510 | orchestrator | Tuesday 17 February 2026 02:48:33 +0000 (0:00:00.113) 0:00:23.524 ******
2026-02-17 02:48:50.282515 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:48:50.282521 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:48:50.282528 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:48:50.282535 | orchestrator |
2026-02-17 02:48:50.282549 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-17 02:48:50.282555 | orchestrator | Tuesday 17 February 2026 02:48:41 +0000 (0:00:07.880) 0:00:31.404 ******
2026-02-17 02:48:50.282561 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:48:50.282567 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:48:50.282573 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:48:50.282580 | orchestrator |
2026-02-17 02:48:50.282586 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-17 02:48:50.282592 | orchestrator | Tuesday 17 February 2026 02:48:41 +0000 (0:00:00.512) 0:00:31.916 ******
2026-02-17 02:48:50.282598 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-17 02:48:50.282605 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-17 02:48:50.282612 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-17 02:48:50.282618 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-17 02:48:50.282624 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-17 02:48:50.282631 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-17 02:48:50.282637 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-17 02:48:50.282643 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-17 02:48:50.282649 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-17 02:48:50.282655 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-17 02:48:50.282662 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-17 02:48:50.282668 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-17 02:48:50.282673 | orchestrator |
2026-02-17 02:48:50.282679 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-17 02:48:50.282691 | orchestrator | Tuesday 17 February 2026 02:48:45 +0000 (0:00:03.423) 0:00:35.340 ******
2026-02-17 02:48:50.282698 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:48:50.282705 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:48:50.282712 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:48:50.282718 | orchestrator |
2026-02-17 02:48:50.282725 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-17 02:48:50.282731 | orchestrator |
2026-02-17 02:48:50.282738 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-17 02:48:50.282745 | orchestrator | Tuesday 17 February 2026 02:48:46 +0000 (0:00:01.283) 0:00:36.623 ******
2026-02-17 02:48:50.282751 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:48:50.282758 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:48:50.282763 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:48:50.282768 | orchestrator | ok: [testbed-manager]
2026-02-17 02:48:50.282772 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:48:50.282776 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:48:50.282780 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:48:50.282785 | orchestrator |
2026-02-17 02:48:50.282789 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 02:48:50.282794 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:48:50.282799 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:48:50.282805 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:48:50.282809 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:48:50.282814 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 02:48:50.282819 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 02:48:50.282823 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 02:48:50.282826 | orchestrator |
2026-02-17 02:48:50.282830 | orchestrator |
2026-02-17 02:48:50.282834 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 02:48:50.282838 | orchestrator | Tuesday 17 February 2026 02:48:50 +0000 (0:00:03.642) 0:00:40.266 ******
2026-02-17 02:48:50.282842 | orchestrator | ===============================================================================
2026-02-17 02:48:50.282845 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.41s
2026-02-17 02:48:50.282849 | orchestrator | Install required packages (Debian) -------------------------------------- 7.88s
2026-02-17 02:48:50.282853 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.64s
2026-02-17 02:48:50.282856 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s
2026-02-17 02:48:50.282860 | orchestrator | Create custom facts directory ------------------------------------------- 1.45s
2026-02-17 02:48:50.282864 | orchestrator | Copy fact file ---------------------------------------------------------- 1.31s
2026-02-17 02:48:50.282872 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.28s
2026-02-17 02:48:50.561688 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.17s
2026-02-17 02:48:50.561760 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.10s
2026-02-17 02:48:50.561782 | orchestrator | Create custom facts directory ------------------------------------------- 0.51s
2026-02-17 02:48:50.561800 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s
2026-02-17 02:48:50.561804 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2026-02-17 02:48:50.561807 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-02-17 02:48:50.561811 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2026-02-17 02:48:50.561815 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-02-17 02:48:50.561820 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s
2026-02-17 02:48:50.561823 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-02-17 02:48:50.561827 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-02-17 02:48:50.921545 | orchestrator | + osism apply bootstrap
2026-02-17 02:49:03.063419 | orchestrator | 2026-02-17 02:49:03 | INFO  | Task 9a7beb70-c174-4d31-82d9-0adf921ed708 (bootstrap) was prepared for execution.
2026-02-17 02:49:03.063495 | orchestrator | 2026-02-17 02:49:03 | INFO  | It takes a moment until task 9a7beb70-c174-4d31-82d9-0adf921ed708 (bootstrap) has been started and output is visible here.
2026-02-17 02:49:20.168221 | orchestrator |
2026-02-17 02:49:20.168330 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-17 02:49:20.168345 | orchestrator |
2026-02-17 02:49:20.168356 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-17 02:49:20.168366 | orchestrator | Tuesday 17 February 2026 02:49:07 +0000 (0:00:00.157) 0:00:00.157 ******
2026-02-17 02:49:20.168376 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:20.168386 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:20.168396 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:20.168406 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:20.168415 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:20.168424 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:20.168434 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:20.168452 | orchestrator |
2026-02-17 02:49:20.168468 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-17 02:49:20.168483 | orchestrator |
2026-02-17 02:49:20.168499 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-17 02:49:20.168515 | orchestrator | Tuesday 17 February 2026 02:49:08 +0000 (0:00:00.292) 0:00:00.449 ******
2026-02-17 02:49:20.168531 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:20.168546 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:20.168561 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:20.168578 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:20.168593 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:20.168610 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:20.168626 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:20.168636 | orchestrator |
2026-02-17 02:49:20.168646 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-17 02:49:20.168656 | orchestrator |
2026-02-17 02:49:20.168665 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-17 02:49:20.168675 | orchestrator | Tuesday 17 February 2026 02:49:11 +0000 (0:00:03.610) 0:00:04.060 ******
2026-02-17 02:49:20.168686 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-17 02:49:20.168696 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-17 02:49:20.168706 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-17 02:49:20.168716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-17 02:49:20.168726 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-17 02:49:20.168737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 02:49:20.168748 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-17 02:49:20.168759 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-17 02:49:20.168770 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-17 02:49:20.168806 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-17 02:49:20.168818 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-17 02:49:20.168828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 02:49:20.168839 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-17 02:49:20.168849 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 02:49:20.168860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-17 02:49:20.168871 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:49:20.168882 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 02:49:20.168893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 02:49:20.168904 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-17 02:49:20.168914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-17 02:49:20.168925 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-17 02:49:20.168936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 02:49:20.168948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 02:49:20.168958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-17 02:49:20.168969 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 02:49:20.168979 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 02:49:20.168989 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-17 02:49:20.169000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 02:49:20.169010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 02:49:20.169021 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-17 02:49:20.169031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 02:49:20.169042 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-17 02:49:20.169053 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-17 02:49:20.169086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 02:49:20.169096 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:49:20.169106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 02:49:20.169115 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-17 02:49:20.169124 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:49:20.169134 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-17 02:49:20.169143 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-17 02:49:20.169153 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-17 02:49:20.169162 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-17 02:49:20.169171 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-17 02:49:20.169180 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-17 02:49:20.169190 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-17 02:49:20.169199 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:49:20.169209 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:49:20.169236 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-17 02:49:20.169246 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-17 02:49:20.169255 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-17 02:49:20.169265 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-17 02:49:20.169274 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:49:20.169283 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 02:49:20.169293 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 02:49:20.169310 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 02:49:20.169335 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:49:20.169346 | orchestrator |
2026-02-17 02:49:20.169356 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-17 02:49:20.169365 | orchestrator |
2026-02-17 02:49:20.169375 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-17 02:49:20.169385 | orchestrator | Tuesday 17 February 2026 02:49:12 +0000 (0:00:00.557) 0:00:04.618 ******
2026-02-17 02:49:20.169394 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:20.169428 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:20.169449 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:20.169459 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:20.169479 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:20.169499 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:20.169509 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:20.169519 | orchestrator |
2026-02-17 02:49:20.169528 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-17 02:49:20.169538 | orchestrator | Tuesday 17 February 2026 02:49:13 +0000 (0:00:01.262) 0:00:05.880 ******
2026-02-17 02:49:20.169548 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:20.169557 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:20.169567 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:20.169576 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:20.169585 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:20.169595 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:20.169604 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:20.169614 | orchestrator |
2026-02-17 02:49:20.169623 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-17 02:49:20.169633 | orchestrator | Tuesday 17 February 2026 02:49:15 +0000 (0:00:00.336) 0:00:07.273 ******
2026-02-17 02:49:20.169643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:49:20.169656 | orchestrator |
2026-02-17 02:49:20.169665 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-17 02:49:20.169675 | orchestrator | Tuesday 17 February 2026 02:49:15 +0000 (0:00:00.336) 0:00:07.610 ******
2026-02-17 02:49:20.169685 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:49:20.169695 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:49:20.169704 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:49:20.169713 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:49:20.169723 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:49:20.169732 | orchestrator | changed: [testbed-manager]
2026-02-17 02:49:20.169742 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:49:20.169751 | orchestrator |
2026-02-17 02:49:20.169761 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-17 02:49:20.169770 | orchestrator | Tuesday 17 February 2026 02:49:17 +0000 (0:00:02.136) 0:00:09.746 ******
2026-02-17 02:49:20.169780 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:49:20.169791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:49:20.169801 | orchestrator |
2026-02-17 02:49:20.169811 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-17 02:49:20.169820 | orchestrator | Tuesday 17 February 2026 02:49:17 +0000 (0:00:00.303) 0:00:10.049 ******
2026-02-17 02:49:20.169830 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:49:20.169842 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:49:20.169859 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:49:20.169877 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:49:20.169894 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:49:20.169912 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:49:20.169940 | orchestrator |
2026-02-17 02:49:20.169967 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-17 02:49:20.169987 | orchestrator | Tuesday 17 February 2026 02:49:18 +0000 (0:00:01.030) 0:00:11.080 ******
2026-02-17 02:49:20.170004 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:49:20.170103 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:49:20.170116 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:49:20.170126 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:49:20.170135 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:49:20.170145 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:49:20.170154 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:49:20.170164 | orchestrator |
2026-02-17 02:49:20.170173 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-17 02:49:20.170183 | orchestrator | Tuesday 17 February 2026 02:49:19 +0000 (0:00:00.703) 0:00:11.783 ******
2026-02-17 02:49:20.170193 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:49:20.170202 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:49:20.170211 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:49:20.170221 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:49:20.170230 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:49:20.170240 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:49:20.170250 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:20.170259 | orchestrator |
2026-02-17 02:49:20.170269 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-17 02:49:20.170280 | orchestrator | Tuesday 17 February 2026 02:49:19 +0000 (0:00:00.442) 0:00:12.226 ******
2026-02-17 02:49:20.170289 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:49:20.170299 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:49:20.170319 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:49:33.935269 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:49:33.935396 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:49:33.935413 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:49:33.935425 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:49:33.935436 | orchestrator |
2026-02-17 02:49:33.935449 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-17 02:49:33.935461 | orchestrator | Tuesday 17 February 2026 02:49:20 +0000 (0:00:00.260) 0:00:12.487 ******
2026-02-17 02:49:33.935473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:49:33.935502 | orchestrator |
2026-02-17 02:49:33.935513 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-17 02:49:33.935525 | orchestrator | Tuesday 17 February 2026 02:49:20 +0000 (0:00:00.345) 0:00:12.832 ******
2026-02-17 02:49:33.935536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:49:33.935546 | orchestrator |
2026-02-17 02:49:33.935557 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-17 02:49:33.935568 | orchestrator | Tuesday 17 February 2026 02:49:20 +0000 (0:00:00.343) 0:00:13.176 ******
2026-02-17 02:49:33.935578 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:33.935590 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:33.935601 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.935611 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.935622 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:33.935633 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.935644 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.935654 | orchestrator |
2026-02-17 02:49:33.935665 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-17 02:49:33.935676 | orchestrator | Tuesday 17 February 2026 02:49:22 +0000 (0:00:01.540) 0:00:14.717 ******
2026-02-17 02:49:33.935711 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:49:33.935725 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:49:33.935737 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:49:33.935749 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:49:33.935761 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:49:33.935773 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:49:33.935786 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:49:33.935798 | orchestrator |
2026-02-17 02:49:33.935810 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-17 02:49:33.935823 | orchestrator | Tuesday 17 February 2026 02:49:22 +0000 (0:00:00.294) 0:00:15.011 ******
2026-02-17 02:49:33.935835 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.935847 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.935859 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:33.935872 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:33.935884 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:33.935895 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.935907 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.935919 | orchestrator |
2026-02-17 02:49:33.935932 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-17 02:49:33.935944 | orchestrator | Tuesday 17 February 2026 02:49:24 +0000 (0:00:01.441) 0:00:16.453 ******
2026-02-17 02:49:33.935957 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:49:33.935969 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:49:33.935981 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:49:33.935993 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:49:33.936006 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:49:33.936018 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:49:33.936031 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:49:33.936043 | orchestrator |
2026-02-17 02:49:33.936055 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-17 02:49:33.936067 | orchestrator | Tuesday 17 February 2026 02:49:24 +0000 (0:00:00.371) 0:00:16.824 ******
2026-02-17 02:49:33.936102 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.936114 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:49:33.936124 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:49:33.936135 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:49:33.936145 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:49:33.936156 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:49:33.936175 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:49:33.936186 | orchestrator |
2026-02-17 02:49:33.936197 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-17 02:49:33.936207 | orchestrator | Tuesday 17 February 2026 02:49:25 +0000 (0:00:00.618) 0:00:17.442 ******
2026-02-17 02:49:33.936218 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.936228 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:49:33.936240 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:49:33.936258 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:49:33.936276 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:49:33.936295 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:49:33.936315 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:49:33.936339 | orchestrator |
2026-02-17 02:49:33.936351 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-17 02:49:33.936361 | orchestrator | Tuesday 17 February 2026 02:49:26 +0000 (0:00:01.125) 0:00:18.567 ******
2026-02-17 02:49:33.936372 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:33.936382 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:33.936393 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.936403 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:33.936418 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.936438 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.936471 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.936489 | orchestrator |
2026-02-17 02:49:33.936507 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-17 02:49:33.936535 | orchestrator | Tuesday 17 February 2026 02:49:27 +0000 (0:00:01.091) 0:00:19.659 ******
2026-02-17 02:49:33.936601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:49:33.936617 | orchestrator |
2026-02-17 02:49:33.936628 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-17 02:49:33.936639 | orchestrator | Tuesday 17 February 2026 02:49:27 +0000 (0:00:00.330) 0:00:19.990 ******
2026-02-17 02:49:33.936656 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:49:33.936674 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:49:33.936692 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:49:33.936709 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:49:33.936727 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:49:33.936745 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:49:33.936756 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:49:33.936767 | orchestrator |
2026-02-17 02:49:33.936778 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-17 02:49:33.936788 | orchestrator | Tuesday 17 February 2026 02:49:29 +0000 (0:00:01.354) 0:00:21.345 ******
2026-02-17 02:49:33.936799 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.936809 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.936820 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.936830 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.936841 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:33.936852 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:33.936862 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:33.936872 | orchestrator |
2026-02-17 02:49:33.936883 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-17 02:49:33.936894 | orchestrator | Tuesday 17 February 2026 02:49:29 +0000 (0:00:00.267) 0:00:21.612 ******
2026-02-17 02:49:33.936904 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.936915 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.936925 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.936935 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.936946 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:33.936956 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:33.936966 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:33.936977 | orchestrator |
2026-02-17 02:49:33.936988 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-17 02:49:33.936998 | orchestrator | Tuesday 17 February 2026 02:49:29 +0000 (0:00:00.255) 0:00:21.887 ******
2026-02-17 02:49:33.937009 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.937019 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.937029 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.937040 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.937050 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:33.937061 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:33.937097 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:33.937110 | orchestrator |
2026-02-17 02:49:33.937120 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-17 02:49:33.937131 | orchestrator | Tuesday 17 February 2026 02:49:29 +0000 (0:00:00.255) 0:00:22.143 ******
2026-02-17 02:49:33.937158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:49:33.937171 | orchestrator |
2026-02-17 02:49:33.937182 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-17 02:49:33.937196 | orchestrator | Tuesday 17 February 2026 02:49:30 +0000 (0:00:00.322) 0:00:22.466 ******
2026-02-17 02:49:33.937216 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.937237 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.937271 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.937289 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.937300 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:33.937311 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:33.937321 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:33.937331 | orchestrator |
2026-02-17 02:49:33.937342 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-17 02:49:33.937353 | orchestrator | Tuesday 17 February 2026 02:49:30 +0000 (0:00:00.536) 0:00:23.002 ******
2026-02-17 02:49:33.937364 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:49:33.937374 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:49:33.937384 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:49:33.937395 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:49:33.937405 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:49:33.937416 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:49:33.937426 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:49:33.937436 | orchestrator |
2026-02-17 02:49:33.937447 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-17 02:49:33.937458 | orchestrator | Tuesday 17 February 2026 02:49:31 +0000 (0:00:00.280) 0:00:23.283 ******
2026-02-17 02:49:33.937468 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.937479 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.937492 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.937510 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.937529 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:49:33.937548 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:49:33.937563 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:49:33.937574 | orchestrator |
2026-02-17 02:49:33.937585 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-17 02:49:33.937596 | orchestrator | Tuesday 17 February 2026 02:49:32 +0000 (0:00:01.093) 0:00:24.376 ******
2026-02-17 02:49:33.937606 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.937616 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.937627 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.937637 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:49:33.937648 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.937658 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:49:33.937669 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:49:33.937679 | orchestrator |
2026-02-17 02:49:33.937690 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-17 02:49:33.937700 | orchestrator | Tuesday 17 February 2026 02:49:32 +0000 (0:00:00.595) 0:00:24.972 ******
2026-02-17 02:49:33.937711 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:49:33.937721 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:49:33.937732 | orchestrator | ok: [testbed-manager]
2026-02-17 02:49:33.937752 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:49:33.937773 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:50:14.432583 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:50:14.432691 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:50:14.432702 | orchestrator |
2026-02-17 02:50:14.432710 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-17 02:50:14.432718 | orchestrator | Tuesday 17 February 2026 02:49:33 +0000 (0:00:01.175) 0:00:26.148 ******
2026-02-17 02:50:14.432724 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:50:14.432731 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:50:14.432737 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:50:14.432743 | orchestrator | changed: [testbed-manager]
2026-02-17 02:50:14.432750 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:50:14.432756 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:50:14.432762 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:50:14.432769 | orchestrator |
2026-02-17 02:50:14.432775 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-17 02:50:14.432781 | orchestrator | Tuesday 17 February 2026 02:49:48 +0000 (0:00:14.747) 0:00:40.895 ******
2026-02-17 02:50:14.432787 | orchestrator | ok: [testbed-manager]
2026-02-17 02:50:14.432810 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:50:14.432817 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:50:14.432823 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:50:14.432829 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:50:14.432834 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:50:14.432840 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:50:14.432846 | orchestrator |
2026-02-17 02:50:14.432852 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-17 02:50:14.432858 | orchestrator | Tuesday 17 February 2026 02:49:48 +0000 (0:00:00.265) 0:00:41.160 ******
2026-02-17 02:50:14.432864 | orchestrator | ok: [testbed-manager]
2026-02-17 02:50:14.432870 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:50:14.432876 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:50:14.432890 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:50:14.432896 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:50:14.432902 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:50:14.432908 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:50:14.432914 | orchestrator |
2026-02-17 02:50:14.432920 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-17 02:50:14.432927 | orchestrator | Tuesday 17 February 2026 02:49:49 +0000 (0:00:00.247) 0:00:41.408 ******
2026-02-17 02:50:14.432932 | orchestrator | ok: [testbed-manager]
2026-02-17 02:50:14.432938 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:50:14.432944 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:50:14.432950 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:50:14.432956 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:50:14.432962 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:50:14.432969 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:50:14.432975 | orchestrator |
2026-02-17 02:50:14.432981 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-17 02:50:14.432987 | orchestrator | Tuesday 17 February 2026 02:49:49 +0000 (0:00:00.284) 0:00:41.693 ******
2026-02-17
02:50:14.432994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:50:14.433002 | orchestrator | 2026-02-17 02:50:14.433009 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-17 02:50:14.433015 | orchestrator | Tuesday 17 February 2026 02:49:49 +0000 (0:00:00.298) 0:00:41.991 ****** 2026-02-17 02:50:14.433021 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:50:14.433027 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:50:14.433032 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:50:14.433038 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:50:14.433044 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:50:14.433050 | orchestrator | ok: [testbed-manager] 2026-02-17 02:50:14.433056 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:50:14.433062 | orchestrator | 2026-02-17 02:50:14.433068 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-17 02:50:14.433074 | orchestrator | Tuesday 17 February 2026 02:49:51 +0000 (0:00:01.606) 0:00:43.598 ****** 2026-02-17 02:50:14.433080 | orchestrator | changed: [testbed-manager] 2026-02-17 02:50:14.433087 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:50:14.433132 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:50:14.433140 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:50:14.433147 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:50:14.433154 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:50:14.433161 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:50:14.433168 | orchestrator | 2026-02-17 02:50:14.433175 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-17 02:50:14.433195 | 
orchestrator | Tuesday 17 February 2026 02:49:52 +0000 (0:00:01.117) 0:00:44.715 ****** 2026-02-17 02:50:14.433202 | orchestrator | ok: [testbed-manager] 2026-02-17 02:50:14.433209 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:50:14.433216 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:50:14.433228 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:50:14.433235 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:50:14.433242 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:50:14.433249 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:50:14.433256 | orchestrator | 2026-02-17 02:50:14.433263 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-17 02:50:14.433270 | orchestrator | Tuesday 17 February 2026 02:49:53 +0000 (0:00:00.890) 0:00:45.605 ****** 2026-02-17 02:50:14.433278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:50:14.433287 | orchestrator | 2026-02-17 02:50:14.433294 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-17 02:50:14.433302 | orchestrator | Tuesday 17 February 2026 02:49:53 +0000 (0:00:00.377) 0:00:45.983 ****** 2026-02-17 02:50:14.433309 | orchestrator | changed: [testbed-manager] 2026-02-17 02:50:14.433316 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:50:14.433322 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:50:14.433329 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:50:14.433337 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:50:14.433344 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:50:14.433351 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:50:14.433358 | orchestrator | 2026-02-17 02:50:14.433378 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-17 02:50:14.433385 | orchestrator | Tuesday 17 February 2026 02:49:54 +0000 (0:00:01.065) 0:00:47.049 ****** 2026-02-17 02:50:14.433392 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:50:14.433399 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:50:14.433406 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:50:14.433413 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:50:14.433420 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:50:14.433427 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:50:14.433433 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:50:14.433440 | orchestrator | 2026-02-17 02:50:14.433447 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-17 02:50:14.433454 | orchestrator | Tuesday 17 February 2026 02:49:55 +0000 (0:00:00.263) 0:00:47.312 ****** 2026-02-17 02:50:14.433461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:50:14.433468 | orchestrator | 2026-02-17 02:50:14.433475 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-17 02:50:14.433482 | orchestrator | Tuesday 17 February 2026 02:49:55 +0000 (0:00:00.323) 0:00:47.635 ****** 2026-02-17 02:50:14.433489 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:50:14.433496 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:50:14.433502 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:50:14.433508 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:50:14.433514 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:50:14.433520 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:50:14.433526 | orchestrator | ok: [testbed-manager] 2026-02-17 02:50:14.433532 | 
orchestrator | 2026-02-17 02:50:14.433538 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-17 02:50:14.433544 | orchestrator | Tuesday 17 February 2026 02:49:56 +0000 (0:00:01.450) 0:00:49.086 ****** 2026-02-17 02:50:14.433550 | orchestrator | changed: [testbed-manager] 2026-02-17 02:50:14.433556 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:50:14.433562 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:50:14.433568 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:50:14.433574 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:50:14.433580 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:50:14.433586 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:50:14.433596 | orchestrator | 2026-02-17 02:50:14.433603 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-17 02:50:14.433609 | orchestrator | Tuesday 17 February 2026 02:49:57 +0000 (0:00:01.107) 0:00:50.193 ****** 2026-02-17 02:50:14.433615 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:50:14.433621 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:50:14.433627 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:50:14.433633 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:50:14.433639 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:50:14.433645 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:50:14.433651 | orchestrator | changed: [testbed-manager] 2026-02-17 02:50:14.433657 | orchestrator | 2026-02-17 02:50:14.433663 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-17 02:50:14.433669 | orchestrator | Tuesday 17 February 2026 02:50:11 +0000 (0:00:13.240) 0:01:03.434 ****** 2026-02-17 02:50:14.433675 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:50:14.433681 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:50:14.433687 | orchestrator | ok: 
[testbed-node-3] 2026-02-17 02:50:14.433693 | orchestrator | ok: [testbed-manager] 2026-02-17 02:50:14.433699 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:50:14.433705 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:50:14.433711 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:50:14.433717 | orchestrator | 2026-02-17 02:50:14.433723 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-17 02:50:14.433729 | orchestrator | Tuesday 17 February 2026 02:50:12 +0000 (0:00:01.467) 0:01:04.901 ****** 2026-02-17 02:50:14.433735 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:50:14.433741 | orchestrator | ok: [testbed-manager] 2026-02-17 02:50:14.433747 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:50:14.433753 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:50:14.433759 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:50:14.433765 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:50:14.433771 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:50:14.433777 | orchestrator | 2026-02-17 02:50:14.433783 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-17 02:50:14.433789 | orchestrator | Tuesday 17 February 2026 02:50:13 +0000 (0:00:00.910) 0:01:05.811 ****** 2026-02-17 02:50:14.433800 | orchestrator | ok: [testbed-manager] 2026-02-17 02:50:14.433806 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:50:14.433812 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:50:14.433818 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:50:14.433824 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:50:14.433830 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:50:14.433836 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:50:14.433842 | orchestrator | 2026-02-17 02:50:14.433848 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-17 02:50:14.433854 | orchestrator | Tuesday 
17 February 2026 02:50:13 +0000 (0:00:00.247) 0:01:06.058 ****** 2026-02-17 02:50:14.433860 | orchestrator | ok: [testbed-manager] 2026-02-17 02:50:14.433866 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:50:14.433871 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:50:14.433877 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:50:14.433883 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:50:14.433889 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:50:14.433895 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:50:14.433901 | orchestrator | 2026-02-17 02:50:14.433907 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-17 02:50:14.433913 | orchestrator | Tuesday 17 February 2026 02:50:14 +0000 (0:00:00.249) 0:01:06.308 ****** 2026-02-17 02:50:14.433920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:50:14.433926 | orchestrator | 2026-02-17 02:50:14.433937 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-17 02:52:38.474925 | orchestrator | Tuesday 17 February 2026 02:50:14 +0000 (0:00:00.336) 0:01:06.645 ****** 2026-02-17 02:52:38.475035 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:52:38.475049 | orchestrator | ok: [testbed-manager] 2026-02-17 02:52:38.475058 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:52:38.475067 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:52:38.475076 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:52:38.475084 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:52:38.475093 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:52:38.475101 | orchestrator | 2026-02-17 02:52:38.475111 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-02-17 02:52:38.475120 | orchestrator | Tuesday 17 February 2026 02:50:16 +0000 (0:00:01.849) 0:01:08.494 ****** 2026-02-17 02:52:38.475129 | orchestrator | changed: [testbed-manager] 2026-02-17 02:52:38.475139 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:52:38.475147 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:52:38.475156 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:52:38.475164 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:52:38.475173 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:52:38.475181 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:52:38.475214 | orchestrator | 2026-02-17 02:52:38.475224 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-17 02:52:38.475233 | orchestrator | Tuesday 17 February 2026 02:50:16 +0000 (0:00:00.686) 0:01:09.181 ****** 2026-02-17 02:52:38.475242 | orchestrator | ok: [testbed-manager] 2026-02-17 02:52:38.475250 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:52:38.475259 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:52:38.475267 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:52:38.475276 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:52:38.475284 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:52:38.475293 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:52:38.475301 | orchestrator | 2026-02-17 02:52:38.475311 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-17 02:52:38.475320 | orchestrator | Tuesday 17 February 2026 02:50:17 +0000 (0:00:00.264) 0:01:09.446 ****** 2026-02-17 02:52:38.475341 | orchestrator | ok: [testbed-manager] 2026-02-17 02:52:38.475359 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:52:38.475368 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:52:38.475376 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:52:38.475384 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:52:38.475393 | 
orchestrator | ok: [testbed-node-0] 2026-02-17 02:52:38.475401 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:52:38.475410 | orchestrator | 2026-02-17 02:52:38.475419 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-02-17 02:52:38.475427 | orchestrator | Tuesday 17 February 2026 02:50:18 +0000 (0:00:01.268) 0:01:10.714 ****** 2026-02-17 02:52:38.475438 | orchestrator | changed: [testbed-manager] 2026-02-17 02:52:38.475447 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:52:38.475458 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:52:38.475467 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:52:38.475476 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:52:38.475486 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:52:38.475496 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:52:38.475505 | orchestrator | 2026-02-17 02:52:38.475519 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-02-17 02:52:38.475529 | orchestrator | Tuesday 17 February 2026 02:50:20 +0000 (0:00:01.849) 0:01:12.564 ****** 2026-02-17 02:52:38.475539 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:52:38.475549 | orchestrator | ok: [testbed-manager] 2026-02-17 02:52:38.475558 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:52:38.475568 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:52:38.475577 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:52:38.475587 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:52:38.475597 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:52:38.475607 | orchestrator | 2026-02-17 02:52:38.475617 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-02-17 02:52:38.475650 | orchestrator | Tuesday 17 February 2026 02:50:22 +0000 (0:00:02.508) 0:01:15.073 ****** 2026-02-17 02:52:38.475659 | orchestrator | ok: [testbed-manager] 2026-02-17 02:52:38.475669 
| orchestrator | ok: [testbed-node-0] 2026-02-17 02:52:38.475679 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:52:38.475688 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:52:38.475697 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:52:38.475707 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:52:38.475716 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:52:38.475725 | orchestrator | 2026-02-17 02:52:38.475735 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-02-17 02:52:38.475744 | orchestrator | Tuesday 17 February 2026 02:50:56 +0000 (0:00:33.561) 0:01:48.634 ****** 2026-02-17 02:52:38.475752 | orchestrator | changed: [testbed-manager] 2026-02-17 02:52:38.475760 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:52:38.475779 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:52:38.475796 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:52:38.475805 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:52:38.475813 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:52:38.475821 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:52:38.475830 | orchestrator | 2026-02-17 02:52:38.475838 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-02-17 02:52:38.475847 | orchestrator | Tuesday 17 February 2026 02:52:20 +0000 (0:01:23.884) 0:03:12.519 ****** 2026-02-17 02:52:38.475856 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:52:38.475864 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:52:38.475873 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:52:38.475881 | orchestrator | ok: [testbed-manager] 2026-02-17 02:52:38.475889 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:52:38.475898 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:52:38.475906 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:52:38.475914 | orchestrator | 2026-02-17 02:52:38.475923 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-02-17 02:52:38.475932 | orchestrator | Tuesday 17 February 2026 02:52:21 +0000 (0:00:01.580) 0:03:14.099 ****** 2026-02-17 02:52:38.475940 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:52:38.475948 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:52:38.475957 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:52:38.475965 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:52:38.475973 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:52:38.475982 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:52:38.475990 | orchestrator | changed: [testbed-manager] 2026-02-17 02:52:38.475998 | orchestrator | 2026-02-17 02:52:38.476007 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-02-17 02:52:38.476015 | orchestrator | Tuesday 17 February 2026 02:52:36 +0000 (0:00:14.300) 0:03:28.400 ****** 2026-02-17 02:52:38.476054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-02-17 02:52:38.476083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-02-17 02:52:38.476103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-02-17 02:52:38.476114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-17 02:52:38.476123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-17 02:52:38.476132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-02-17 02:52:38.476145 | orchestrator | 2026-02-17 02:52:38.476161 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-02-17 02:52:38.476175 | orchestrator | Tuesday 17 February 2026 02:52:36 +0000 (0:00:00.473) 0:03:28.874 ****** 2026-02-17 02:52:38.476212 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-17 02:52:38.476227 | orchestrator | 
skipping: [testbed-manager] 2026-02-17 02:52:38.476239 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-17 02:52:38.476253 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-17 02:52:38.476268 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:52:38.476282 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:52:38.476295 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-17 02:52:38.476304 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:52:38.476313 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 02:52:38.476321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 02:52:38.476330 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 02:52:38.476348 | orchestrator | 2026-02-17 02:52:38.476358 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-02-17 02:52:38.476375 | orchestrator | Tuesday 17 February 2026 02:52:38 +0000 (0:00:01.723) 0:03:30.597 ****** 2026-02-17 02:52:38.476384 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-17 02:52:38.476394 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-17 02:52:38.476402 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-17 02:52:38.476411 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-17 02:52:38.476419 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-17 02:52:38.476435 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-17 02:52:43.221637 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-17 02:52:43.221738 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-17 02:52:43.221779 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-17 02:52:43.221793 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-17 02:52:43.221805 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-17 02:52:43.221816 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-17 02:52:43.221826 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-17 02:52:43.221837 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-17 02:52:43.221848 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-17 02:52:43.221877 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:52:43.221900 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-17 02:52:43.221912 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-17 02:52:43.221923 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-17 02:52:43.221933 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-17 02:52:43.221944 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-17 02:52:43.221955 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:52:43.221965 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-17 02:52:43.221977 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-17 02:52:43.221987 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-17 02:52:43.221998 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-17 02:52:43.222008 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-17 02:52:43.222076 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-17 02:52:43.222091 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-17 02:52:43.222102 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-17 02:52:43.222112 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-17 02:52:43.222123 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-17 02:52:43.222134 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-17 02:52:43.222144 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-17 02:52:43.222155 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-17 02:52:43.222167 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-17 02:52:43.222224 
| orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-17 02:52:43.222238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-17 02:52:43.222251 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-17 02:52:43.222263 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-17 02:52:43.222275 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-17 02:52:43.222297 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-17 02:52:43.222309 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:52:43.222320 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:52:43.222331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-17 02:52:43.222341 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-17 02:52:43.222352 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-17 02:52:43.222362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-17 02:52:43.222373 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-17 02:52:43.222402 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-17 02:52:43.222414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-17 02:52:43.222425 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-17 02:52:43.222436 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-17 02:52:43.222446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-17 02:52:43.222457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-17 02:52:43.222468 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-17 02:52:43.222478 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-17 02:52:43.222489 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-17 02:52:43.222500 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-17 02:52:43.222510 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-17 02:52:43.222521 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-17 02:52:43.222531 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-17 02:52:43.222542 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-17 02:52:43.222552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-17 02:52:43.222563 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-17 02:52:43.222573 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-17 02:52:43.222584 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-17 02:52:43.222594 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-17 02:52:43.222605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-17 02:52:43.222616 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-17 02:52:43.222626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-17 02:52:43.222637 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-17 02:52:43.222648 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-17 02:52:43.222659 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-17 02:52:43.222678 | orchestrator |
2026-02-17 02:52:43.222690 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-17 02:52:43.222701 | orchestrator | Tuesday 17 February 2026 02:52:42 +0000 (0:00:03.681) 0:03:34.279 ******
2026-02-17 02:52:43.222712 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-17 02:52:43.222723 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-17 02:52:43.222733 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-17 02:52:43.222744 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-17 02:52:43.222759 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-17 02:52:43.222770 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-17 02:52:43.222781 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-17 02:52:43.222792 | orchestrator |
2026-02-17 02:52:43.222803 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-17 02:52:43.222813 | orchestrator | Tuesday 17 February 2026 02:52:42 +0000 (0:00:00.635) 0:03:34.915 ******
2026-02-17 02:52:43.222824 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:43.222835 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:52:43.222845 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:43.222856 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:43.222867 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:52:43.222878 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:52:43.222888 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:43.222899 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:52:43.222910 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:43.222920 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:43.222939 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:57.487397 | orchestrator |
2026-02-17 02:52:57.487482 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-17 02:52:57.487491 | orchestrator | Tuesday 17 February 2026 02:52:43 +0000 (0:00:00.519) 0:03:35.435 ******
2026-02-17 02:52:57.487497 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:57.487504 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:52:57.487511 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:57.487517 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:57.487523 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:52:57.487528 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:52:57.487534 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:57.487539 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:52:57.487545 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:57.487550 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:57.487556 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-17 02:52:57.487561 | orchestrator |
2026-02-17 02:52:57.487567 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-17 02:52:57.487589 | orchestrator | Tuesday 17 February 2026 02:52:43 +0000 (0:00:00.635) 0:03:36.070 ******
2026-02-17 02:52:57.487595 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-17 02:52:57.487600 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:52:57.487605 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-17 02:52:57.487611 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-17 02:52:57.487616 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:52:57.487622 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:52:57.487627 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-17 02:52:57.487632 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:52:57.487638 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-17 02:52:57.487643 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-17 02:52:57.487649 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-17 02:52:57.487654 | orchestrator |
2026-02-17 02:52:57.487660 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-17 02:52:57.487665 | orchestrator | Tuesday 17 February 2026 02:52:44 +0000 (0:00:00.369) 0:03:36.704 ******
2026-02-17 02:52:57.487670 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:52:57.487675 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:52:57.487681 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:52:57.487686 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:52:57.487691 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:52:57.487697 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:52:57.487702 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:52:57.487707 | orchestrator |
2026-02-17 02:52:57.487712 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-17 02:52:57.487718 | orchestrator | Tuesday 17 February 2026 02:52:44 +0000 (0:00:00.369) 0:03:37.073 ******
2026-02-17 02:52:57.487723 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:52:57.487729 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:52:57.487735 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:52:57.487740 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:52:57.487745 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:52:57.487751 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:52:57.487756 | orchestrator | ok: [testbed-manager]
2026-02-17 02:52:57.487761 | orchestrator |
2026-02-17 02:52:57.487767 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-17 02:52:57.487772 | orchestrator | Tuesday 17 February 2026 02:52:50 +0000 (0:00:05.711) 0:03:42.785 ******
2026-02-17 02:52:57.487778 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-17 02:52:57.487784 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-17 02:52:57.487789 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:52:57.487794 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-17 02:52:57.487800 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:52:57.487805 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:52:57.487810 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-17 02:52:57.487816 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-17 02:52:57.487821 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:52:57.487827 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-17 02:52:57.487845 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:52:57.487851 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:52:57.487856 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-17 02:52:57.487861 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:52:57.487871 | orchestrator |
2026-02-17 02:52:57.487877 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-17 02:52:57.487882 | orchestrator | Tuesday 17 February 2026 02:52:50 +0000 (0:00:00.352) 0:03:43.138 ******
2026-02-17 02:52:57.487887 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-02-17 02:52:57.487893 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-02-17 02:52:57.487898 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-02-17 02:52:57.487916 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-02-17 02:52:57.487922 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-02-17 02:52:57.487927 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-02-17 02:52:57.487932 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-02-17 02:52:57.487937 | orchestrator |
2026-02-17 02:52:57.487943 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-02-17 02:52:57.487948 | orchestrator | Tuesday 17 February 2026 02:52:52 +0000 (0:00:01.871) 0:03:45.010 ******
2026-02-17 02:52:57.487956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:52:57.487964 | orchestrator |
2026-02-17 02:52:57.487971 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-02-17 02:52:57.487977 | orchestrator | Tuesday 17 February 2026 02:52:53 +0000 (0:00:00.598) 0:03:45.608 ******
2026-02-17 02:52:57.487983 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:52:57.487989 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:52:57.487995 | orchestrator | ok: [testbed-manager]
2026-02-17 02:52:57.488001 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:52:57.488007 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:52:57.488013 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:52:57.488019 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:52:57.488036 | orchestrator |
2026-02-17 02:52:57.488049 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-02-17 02:52:57.488056 | orchestrator | Tuesday 17 February 2026 02:52:54 +0000 (0:00:01.169) 0:03:46.777 ******
2026-02-17 02:52:57.488062 | orchestrator | ok: [testbed-manager]
2026-02-17 02:52:57.488068 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:52:57.488074 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:52:57.488080 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:52:57.488086 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:52:57.488092 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:52:57.488098 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:52:57.488104 | orchestrator |
2026-02-17 02:52:57.488110 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-02-17 02:52:57.488116 | orchestrator | Tuesday 17 February 2026 02:52:55 +0000 (0:00:00.636) 0:03:47.413 ******
2026-02-17 02:52:57.488123 | orchestrator | changed: [testbed-manager]
2026-02-17 02:52:57.488129 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:52:57.488135 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:52:57.488141 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:52:57.488147 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:52:57.488153 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:52:57.488159 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:52:57.488165 | orchestrator |
2026-02-17 02:52:57.488171 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-02-17 02:52:57.488178 | orchestrator | Tuesday 17 February 2026 02:52:55 +0000 (0:00:00.613) 0:03:48.027 ******
2026-02-17 02:52:57.488183 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:52:57.488190 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:52:57.488196 | orchestrator | ok: [testbed-manager]
2026-02-17 02:52:57.488223 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:52:57.488229 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:52:57.488235 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:52:57.488241 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:52:57.488247 | orchestrator |
2026-02-17 02:52:57.488254 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-02-17 02:52:57.488265 | orchestrator | Tuesday 17 February 2026 02:52:56 +0000 (0:00:00.659) 0:03:48.686 ******
2026-02-17 02:52:57.488277 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771295302.0355685, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:52:57.488286 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771295305.3913949, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:52:57.488293 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771295312.301027, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:52:57.488313 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771295305.2319727, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577375 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771295294.003758, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577471 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771295314.5378847, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577484 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771295311.3323078, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577516 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577537 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577546 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577555 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577585 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577594 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577603 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 02:53:02.577618 | orchestrator |
2026-02-17 02:53:02.577629 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-02-17 02:53:02.577639 | orchestrator | Tuesday 17 February 2026 02:52:57 +0000 (0:00:01.013) 0:03:49.699 ******
2026-02-17 02:53:02.577647 | orchestrator | changed: [testbed-manager]
2026-02-17 02:53:02.577656 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:53:02.577664 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:53:02.577672 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:53:02.577680 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:53:02.577688 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:53:02.577696 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:53:02.577704 | orchestrator |
2026-02-17 02:53:02.577712 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-02-17 02:53:02.577720 | orchestrator | Tuesday 17 February 2026 02:52:58 +0000 (0:00:01.140) 0:03:50.839 ******
2026-02-17 02:53:02.577728 | orchestrator | changed: [testbed-manager]
2026-02-17 02:53:02.577736 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:53:02.577744 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:53:02.577752 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:53:02.577759 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:53:02.577767 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:53:02.577775 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:53:02.577782 | orchestrator |
2026-02-17 02:53:02.577794 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-02-17 02:53:02.577803 | orchestrator | Tuesday 17 February 2026 02:52:59 +0000 (0:00:01.215) 0:03:52.055 ******
2026-02-17 02:53:02.577810 | orchestrator | changed: [testbed-manager]
2026-02-17 02:53:02.577818 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:53:02.577826 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:53:02.577834 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:53:02.577841 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:53:02.577849 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:53:02.577857 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:53:02.577865 | orchestrator |
2026-02-17 02:53:02.577873 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-02-17 02:53:02.577886 | orchestrator | Tuesday 17 February 2026 02:53:01 +0000 (0:00:01.202) 0:03:53.258 ******
2026-02-17 02:53:02.577899 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:53:02.577912 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:53:02.577926 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:53:02.577939 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:53:02.577952 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:53:02.577965 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:53:02.577978 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:53:02.577992 | orchestrator |
2026-02-17 02:53:02.578006 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-02-17 02:53:02.578103 | orchestrator | Tuesday 17 February 2026 02:53:01 +0000 (0:00:00.310) 0:03:53.568 ******
2026-02-17 02:53:02.578132 | orchestrator | ok: [testbed-manager]
2026-02-17 02:53:02.578148 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:53:02.578162 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:53:02.578176 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:53:02.578305 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:53:02.578325 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:53:02.578337 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:53:02.578350 | orchestrator |
2026-02-17 02:53:02.578364 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-02-17 02:53:02.578378 | orchestrator | Tuesday 17 February 2026 02:53:02 +0000 (0:00:00.802) 0:03:54.371 ******
2026-02-17 02:53:02.578392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:53:02.578421 | orchestrator |
2026-02-17 02:53:02.578435 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-02-17 02:53:02.578459 | orchestrator | Tuesday 17 February 2026 02:53:02 +0000 (0:00:00.426) 0:03:54.797 ******
2026-02-17 02:54:20.258519 | orchestrator | ok: [testbed-manager]
2026-02-17 02:54:20.258696 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:54:20.258716 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:54:20.258728 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:54:20.258738 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:54:20.258749 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:54:20.258760 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:54:20.258809 | orchestrator |
2026-02-17 02:54:20.258824 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-02-17 02:54:20.258837 | orchestrator | Tuesday 17 February 2026 02:53:10 +0000 (0:00:07.670) 0:04:02.467 ******
2026-02-17 02:54:20.258847 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:54:20.258858 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:54:20.258869 | orchestrator | ok: [testbed-manager]
2026-02-17 02:54:20.258879 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:54:20.258889 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:54:20.258900 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:54:20.258910 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:54:20.258920 | orchestrator |
2026-02-17 02:54:20.258930 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-02-17 02:54:20.258940 | orchestrator | Tuesday 17 February 2026 02:53:11 +0000 (0:00:01.176) 0:04:03.643 ******
2026-02-17 02:54:20.258950 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:54:20.258961 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:54:20.258972 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:54:20.258982 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:54:20.258993 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:54:20.259003 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:54:20.259013 | orchestrator | ok: [testbed-manager]
2026-02-17 02:54:20.259024 | orchestrator |
2026-02-17 02:54:20.259035 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-02-17 02:54:20.259046 | orchestrator | Tuesday 17 February 2026 02:53:13 +0000 (0:00:01.941) 0:04:05.585 ******
2026-02-17 02:54:20.259056 | orchestrator | ok: [testbed-manager]
2026-02-17 02:54:20.259067 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:54:20.259077 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:54:20.259088 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:54:20.259099 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:54:20.259109 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:54:20.259120 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:54:20.259130 | orchestrator |
2026-02-17 02:54:20.259142 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-17 02:54:20.259154 | orchestrator | Tuesday 17 February 2026 02:53:13 +0000 (0:00:00.313) 0:04:05.899 ******
2026-02-17 02:54:20.259165 | orchestrator | ok: [testbed-manager]
2026-02-17 02:54:20.259175 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:54:20.259185 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:54:20.259195 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:54:20.259205 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:54:20.259216 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:54:20.259226 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:54:20.259237 | orchestrator |
2026-02-17 02:54:20.259247 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-17 02:54:20.259257 | orchestrator | Tuesday 17 February 2026 02:53:14 +0000 (0:00:00.333) 0:04:06.232 ******
2026-02-17 02:54:20.259267 | orchestrator | ok: [testbed-manager]
2026-02-17 02:54:20.259303 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:54:20.259314 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:54:20.259358 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:54:20.259371 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:54:20.259380 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:54:20.259390 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:54:20.259400 | orchestrator |
2026-02-17 02:54:20.259411 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-17 02:54:20.259422 | orchestrator | Tuesday 17 February 2026 02:53:14 +0000 (0:00:00.334) 0:04:06.566 ******
2026-02-17 02:54:20.259432 | orchestrator | ok: [testbed-manager]
2026-02-17 02:54:20.259444 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:54:20.259455 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:54:20.259463 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:54:20.259470 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:54:20.259476 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:54:20.259481 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:54:20.259487 | orchestrator |
2026-02-17 02:54:20.259494 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-17 02:54:20.259500 | orchestrator | Tuesday 17 February 2026 02:53:19 +0000 (0:00:05.527) 0:04:12.094 ******
2026-02-17 02:54:20.259509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:54:20.259519 | orchestrator |
2026-02-17 02:54:20.259525 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-17 02:54:20.259531 | orchestrator | Tuesday 17 February 2026 02:53:20 +0000 (0:00:00.463) 0:04:12.557 ******
2026-02-17 02:54:20.259538 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-17 02:54:20.259544 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-17 02:54:20.259551 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-17 02:54:20.259557 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-17 02:54:20.259563 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:54:20.259589 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-17 02:54:20.259596 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-17 02:54:20.259602 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:54:20.259608 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-17 02:54:20.259614 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-17 02:54:20.259620 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:54:20.259626 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-17 02:54:20.259632 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:54:20.259638 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-17 02:54:20.259644 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-17 02:54:20.259650 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-17 02:54:20.259679 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:54:20.259689 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:54:20.259699 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-17 02:54:20.259709 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-17 02:54:20.259719 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:54:20.259729 | orchestrator |
2026-02-17 02:54:20.259740 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-17 02:54:20.259750 | orchestrator | Tuesday 17 February 2026 02:53:20 +0000 (0:00:00.391) 0:04:12.949 ******
2026-02-17 02:54:20.259763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:54:20.259773 | orchestrator |
2026-02-17 02:54:20.259784 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-17 02:54:20.259806 | orchestrator | Tuesday 17 February 2026 02:53:21 +0000 (0:00:00.475) 0:04:13.425 ******
2026-02-17 02:54:20.259817 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-17 02:54:20.259825 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-17 02:54:20.259832 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:54:20.259838 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:54:20.259844 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-17 02:54:20.259851 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-17 02:54:20.259857 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:54:20.259863 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-17 02:54:20.259869 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:54:20.259875 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-17 02:54:20.259881 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:54:20.259887 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:54:20.259893 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-17 02:54:20.259899 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:54:20.259905 | orchestrator |
2026-02-17 02:54:20.259911 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-17 02:54:20.259917 | orchestrator | Tuesday 17 February 2026 02:53:21 +0000 (0:00:00.370) 0:04:13.795 ******
2026-02-17 02:54:20.259924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:54:20.259930 | orchestrator |
2026-02-17 02:54:20.259936 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-17 02:54:20.259942 | orchestrator | Tuesday 17 February 2026 02:53:22 +0000 (0:00:00.512) 0:04:14.308 ******
2026-02-17 02:54:20.259948 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:54:20.259954 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:54:20.259961 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:54:20.259970 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:54:20.259986 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:54:20.259997 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:54:20.260006 | orchestrator | changed: [testbed-manager] 2026-02-17 02:54:20.260015 | orchestrator | 2026-02-17 02:54:20.260025 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-02-17 02:54:20.260034 | orchestrator | Tuesday 17 February 2026 02:53:56 +0000 (0:00:34.420) 0:04:48.729 ****** 2026-02-17 02:54:20.260044 | orchestrator | changed: [testbed-manager] 2026-02-17 02:54:20.260055 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:54:20.260064 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:54:20.260074 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:54:20.260083 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:54:20.260093 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:54:20.260104 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:54:20.260113 | orchestrator | 2026-02-17 02:54:20.260123 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-02-17 02:54:20.260133 | orchestrator | Tuesday 17 February 2026 02:54:04 +0000 (0:00:08.309) 0:04:57.038 ****** 2026-02-17 02:54:20.260142 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:54:20.260152 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:54:20.260161 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:54:20.260172 | orchestrator | changed: [testbed-manager] 2026-02-17 02:54:20.260178 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:54:20.260183 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:54:20.260189 | orchestrator | changed: [testbed-node-2] 2026-02-17 
02:54:20.260195 | orchestrator | 2026-02-17 02:54:20.260200 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-02-17 02:54:20.260212 | orchestrator | Tuesday 17 February 2026 02:54:12 +0000 (0:00:07.635) 0:05:04.674 ****** 2026-02-17 02:54:20.260218 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:54:20.260224 | orchestrator | ok: [testbed-manager] 2026-02-17 02:54:20.260230 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:54:20.260235 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:54:20.260241 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:54:20.260247 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:54:20.260252 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:54:20.260259 | orchestrator | 2026-02-17 02:54:20.260270 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-02-17 02:54:20.260314 | orchestrator | Tuesday 17 February 2026 02:54:14 +0000 (0:00:01.702) 0:05:06.376 ****** 2026-02-17 02:54:20.260324 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:54:20.260333 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:54:20.260342 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:54:20.260352 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:54:20.260360 | orchestrator | changed: [testbed-manager] 2026-02-17 02:54:20.260369 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:54:20.260377 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:54:20.260387 | orchestrator | 2026-02-17 02:54:20.260407 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-02-17 02:54:32.129192 | orchestrator | Tuesday 17 February 2026 02:54:20 +0000 (0:00:06.091) 0:05:12.467 ****** 2026-02-17 02:54:32.129282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:54:32.129321 | orchestrator | 2026-02-17 02:54:32.129327 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-02-17 02:54:32.129331 | orchestrator | Tuesday 17 February 2026 02:54:20 +0000 (0:00:00.655) 0:05:13.123 ****** 2026-02-17 02:54:32.129336 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:54:32.129341 | orchestrator | changed: [testbed-manager] 2026-02-17 02:54:32.129345 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:54:32.129348 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:54:32.129352 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:54:32.129356 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:54:32.129360 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:54:32.129363 | orchestrator | 2026-02-17 02:54:32.129367 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-02-17 02:54:32.129371 | orchestrator | Tuesday 17 February 2026 02:54:21 +0000 (0:00:00.733) 0:05:13.856 ****** 2026-02-17 02:54:32.129375 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:54:32.129380 | orchestrator | ok: [testbed-manager] 2026-02-17 02:54:32.129384 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:54:32.129387 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:54:32.129391 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:54:32.129395 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:54:32.129398 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:54:32.129402 | orchestrator | 2026-02-17 02:54:32.129406 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-02-17 02:54:32.129410 | orchestrator | Tuesday 17 February 2026 02:54:23 +0000 (0:00:01.644) 0:05:15.501 ****** 2026-02-17 02:54:32.129413 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:54:32.129417 | orchestrator | 
changed: [testbed-node-3] 2026-02-17 02:54:32.129421 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:54:32.129424 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:54:32.129428 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:54:32.129432 | orchestrator | changed: [testbed-manager] 2026-02-17 02:54:32.129436 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:54:32.129440 | orchestrator | 2026-02-17 02:54:32.129444 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-02-17 02:54:32.129447 | orchestrator | Tuesday 17 February 2026 02:54:24 +0000 (0:00:00.800) 0:05:16.301 ****** 2026-02-17 02:54:32.129468 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:54:32.129472 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:54:32.129475 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:54:32.129479 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:54:32.129483 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:54:32.129486 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:54:32.129490 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:54:32.129494 | orchestrator | 2026-02-17 02:54:32.129497 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-02-17 02:54:32.129501 | orchestrator | Tuesday 17 February 2026 02:54:24 +0000 (0:00:00.307) 0:05:16.608 ****** 2026-02-17 02:54:32.129505 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:54:32.129509 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:54:32.129512 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:54:32.129527 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:54:32.129531 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:54:32.129534 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:54:32.129538 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:54:32.129542 | orchestrator | 2026-02-17 
02:54:32.129545 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-02-17 02:54:32.129549 | orchestrator | Tuesday 17 February 2026 02:54:24 +0000 (0:00:00.451) 0:05:17.059 ****** 2026-02-17 02:54:32.129553 | orchestrator | ok: [testbed-manager] 2026-02-17 02:54:32.129556 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:54:32.129560 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:54:32.129564 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:54:32.129567 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:54:32.129572 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:54:32.129578 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:54:32.129584 | orchestrator | 2026-02-17 02:54:32.129590 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-02-17 02:54:32.129596 | orchestrator | Tuesday 17 February 2026 02:54:25 +0000 (0:00:00.326) 0:05:17.386 ****** 2026-02-17 02:54:32.129602 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:54:32.129608 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:54:32.129614 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:54:32.129619 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:54:32.129625 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:54:32.129632 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:54:32.129637 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:54:32.129643 | orchestrator | 2026-02-17 02:54:32.129648 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-02-17 02:54:32.129655 | orchestrator | Tuesday 17 February 2026 02:54:25 +0000 (0:00:00.357) 0:05:17.743 ****** 2026-02-17 02:54:32.129661 | orchestrator | ok: [testbed-manager] 2026-02-17 02:54:32.129666 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:54:32.129672 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:54:32.129678 | orchestrator | 
ok: [testbed-node-5] 2026-02-17 02:54:32.129684 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:54:32.129690 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:54:32.129697 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:54:32.129704 | orchestrator | 2026-02-17 02:54:32.129711 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-02-17 02:54:32.129717 | orchestrator | Tuesday 17 February 2026 02:54:25 +0000 (0:00:00.340) 0:05:18.083 ****** 2026-02-17 02:54:32.129723 | orchestrator | ok: [testbed-manager] =>  2026-02-17 02:54:32.129730 | orchestrator |  docker_version: 5:27.5.1 2026-02-17 02:54:32.129736 | orchestrator | ok: [testbed-node-3] =>  2026-02-17 02:54:32.129742 | orchestrator |  docker_version: 5:27.5.1 2026-02-17 02:54:32.129747 | orchestrator | ok: [testbed-node-4] =>  2026-02-17 02:54:32.129751 | orchestrator |  docker_version: 5:27.5.1 2026-02-17 02:54:32.129755 | orchestrator | ok: [testbed-node-5] =>  2026-02-17 02:54:32.129760 | orchestrator |  docker_version: 5:27.5.1 2026-02-17 02:54:32.129781 | orchestrator | ok: [testbed-node-0] =>  2026-02-17 02:54:32.129785 | orchestrator |  docker_version: 5:27.5.1 2026-02-17 02:54:32.129790 | orchestrator | ok: [testbed-node-1] =>  2026-02-17 02:54:32.129794 | orchestrator |  docker_version: 5:27.5.1 2026-02-17 02:54:32.129798 | orchestrator | ok: [testbed-node-2] =>  2026-02-17 02:54:32.129802 | orchestrator |  docker_version: 5:27.5.1 2026-02-17 02:54:32.129806 | orchestrator | 2026-02-17 02:54:32.129811 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-02-17 02:54:32.129815 | orchestrator | Tuesday 17 February 2026 02:54:26 +0000 (0:00:00.305) 0:05:18.388 ****** 2026-02-17 02:54:32.129819 | orchestrator | ok: [testbed-manager] =>  2026-02-17 02:54:32.129823 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-17 02:54:32.129828 | orchestrator | ok: [testbed-node-3] =>  2026-02-17 
02:54:32.129832 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-17 02:54:32.129836 | orchestrator | ok: [testbed-node-4] =>  2026-02-17 02:54:32.129841 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-17 02:54:32.129845 | orchestrator | ok: [testbed-node-5] =>  2026-02-17 02:54:32.129849 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-17 02:54:32.129853 | orchestrator | ok: [testbed-node-0] =>  2026-02-17 02:54:32.129857 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-17 02:54:32.129862 | orchestrator | ok: [testbed-node-1] =>  2026-02-17 02:54:32.129866 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-17 02:54:32.129870 | orchestrator | ok: [testbed-node-2] =>  2026-02-17 02:54:32.129874 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-17 02:54:32.129879 | orchestrator | 2026-02-17 02:54:32.129883 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-02-17 02:54:32.129888 | orchestrator | Tuesday 17 February 2026 02:54:26 +0000 (0:00:00.347) 0:05:18.736 ****** 2026-02-17 02:54:32.129892 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:54:32.129896 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:54:32.129901 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:54:32.129905 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:54:32.129909 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:54:32.129914 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:54:32.129918 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:54:32.129922 | orchestrator | 2026-02-17 02:54:32.129926 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-02-17 02:54:32.129931 | orchestrator | Tuesday 17 February 2026 02:54:26 +0000 (0:00:00.324) 0:05:19.060 ****** 2026-02-17 02:54:32.129935 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:54:32.129939 | orchestrator | skipping: [testbed-node-3] 
2026-02-17 02:54:32.129944 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:54:32.129948 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:54:32.129953 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:54:32.129957 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:54:32.129961 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:54:32.129965 | orchestrator | 2026-02-17 02:54:32.129968 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-02-17 02:54:32.129972 | orchestrator | Tuesday 17 February 2026 02:54:27 +0000 (0:00:00.328) 0:05:19.389 ****** 2026-02-17 02:54:32.129977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:54:32.129982 | orchestrator | 2026-02-17 02:54:32.129990 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-02-17 02:54:32.129994 | orchestrator | Tuesday 17 February 2026 02:54:27 +0000 (0:00:00.485) 0:05:19.874 ****** 2026-02-17 02:54:32.129997 | orchestrator | ok: [testbed-manager] 2026-02-17 02:54:32.130001 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:54:32.130005 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:54:32.130008 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:54:32.130047 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:54:32.130056 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:54:32.130060 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:54:32.130063 | orchestrator | 2026-02-17 02:54:32.130067 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-02-17 02:54:32.130071 | orchestrator | Tuesday 17 February 2026 02:54:28 +0000 (0:00:01.086) 0:05:20.961 ****** 2026-02-17 02:54:32.130075 | orchestrator 
| ok: [testbed-node-0] 2026-02-17 02:54:32.130078 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:54:32.130082 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:54:32.130089 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:54:32.130095 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:54:32.130102 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:54:32.130109 | orchestrator | ok: [testbed-manager] 2026-02-17 02:54:32.130116 | orchestrator | 2026-02-17 02:54:32.130123 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-02-17 02:54:32.130131 | orchestrator | Tuesday 17 February 2026 02:54:31 +0000 (0:00:02.931) 0:05:23.893 ****** 2026-02-17 02:54:32.130137 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-02-17 02:54:32.130144 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-02-17 02:54:32.130151 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-02-17 02:54:32.130157 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-02-17 02:54:32.130164 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-02-17 02:54:32.130171 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:54:32.130179 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-02-17 02:54:32.130186 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-02-17 02:54:32.130190 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-02-17 02:54:32.130194 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-02-17 02:54:32.130198 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:54:32.130201 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-02-17 02:54:32.130205 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-02-17 02:54:32.130209 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:54:32.130213 | 
orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-02-17 02:54:32.130216 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-02-17 02:54:32.130225 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-02-17 02:55:32.004021 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-02-17 02:55:32.004104 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:55:32.004112 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-02-17 02:55:32.004116 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-02-17 02:55:32.004121 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-02-17 02:55:32.004125 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:55:32.004129 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:55:32.004133 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-02-17 02:55:32.004137 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-02-17 02:55:32.004141 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-02-17 02:55:32.004145 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:55:32.004149 | orchestrator | 2026-02-17 02:55:32.004154 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-02-17 02:55:32.004160 | orchestrator | Tuesday 17 February 2026 02:54:32 +0000 (0:00:00.704) 0:05:24.597 ****** 2026-02-17 02:55:32.004164 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004167 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004171 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004175 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004179 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004182 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004201 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004205 | 
orchestrator | 2026-02-17 02:55:32.004209 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-02-17 02:55:32.004213 | orchestrator | Tuesday 17 February 2026 02:54:39 +0000 (0:00:06.821) 0:05:31.419 ****** 2026-02-17 02:55:32.004217 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004220 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004224 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004228 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004231 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004235 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004239 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004243 | orchestrator | 2026-02-17 02:55:32.004246 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-02-17 02:55:32.004250 | orchestrator | Tuesday 17 February 2026 02:54:40 +0000 (0:00:01.126) 0:05:32.546 ****** 2026-02-17 02:55:32.004254 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004257 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004261 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004265 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004268 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004272 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004276 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004280 | orchestrator | 2026-02-17 02:55:32.004283 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-02-17 02:55:32.004287 | orchestrator | Tuesday 17 February 2026 02:54:48 +0000 (0:00:08.037) 0:05:40.583 ****** 2026-02-17 02:55:32.004291 | orchestrator | changed: [testbed-manager] 2026-02-17 02:55:32.004295 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004298 | orchestrator | changed: 
[testbed-node-3] 2026-02-17 02:55:32.004302 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004306 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004310 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004313 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004317 | orchestrator | 2026-02-17 02:55:32.004321 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-02-17 02:55:32.004325 | orchestrator | Tuesday 17 February 2026 02:54:51 +0000 (0:00:03.450) 0:05:44.033 ****** 2026-02-17 02:55:32.004329 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004333 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004337 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004340 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004382 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004386 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004390 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004394 | orchestrator | 2026-02-17 02:55:32.004398 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-02-17 02:55:32.004402 | orchestrator | Tuesday 17 February 2026 02:54:53 +0000 (0:00:01.306) 0:05:45.340 ****** 2026-02-17 02:55:32.004405 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004409 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004413 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004417 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004421 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004424 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004428 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004432 | orchestrator | 2026-02-17 02:55:32.004436 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2026-02-17 02:55:32.004440 | orchestrator | Tuesday 17 February 2026 02:54:54 +0000 (0:00:01.689) 0:05:47.029 ****** 2026-02-17 02:55:32.004444 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:55:32.004447 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:55:32.004451 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:55:32.004455 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:55:32.004462 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:55:32.004466 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:55:32.004470 | orchestrator | changed: [testbed-manager] 2026-02-17 02:55:32.004473 | orchestrator | 2026-02-17 02:55:32.004477 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-02-17 02:55:32.004481 | orchestrator | Tuesday 17 February 2026 02:54:55 +0000 (0:00:00.694) 0:05:47.724 ****** 2026-02-17 02:55:32.004485 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004488 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004492 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004496 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004499 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004503 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004507 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004510 | orchestrator | 2026-02-17 02:55:32.004514 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-02-17 02:55:32.004528 | orchestrator | Tuesday 17 February 2026 02:55:04 +0000 (0:00:08.792) 0:05:56.516 ****** 2026-02-17 02:55:32.004532 | orchestrator | changed: [testbed-manager] 2026-02-17 02:55:32.004536 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004540 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004543 | orchestrator | changed: [testbed-node-5] 2026-02-17 
02:55:32.004548 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004554 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004559 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004565 | orchestrator | 2026-02-17 02:55:32.004572 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-02-17 02:55:32.004578 | orchestrator | Tuesday 17 February 2026 02:55:05 +0000 (0:00:01.084) 0:05:57.600 ****** 2026-02-17 02:55:32.004585 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004592 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004597 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004604 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004611 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004618 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004625 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004632 | orchestrator | 2026-02-17 02:55:32.004639 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-02-17 02:55:32.004647 | orchestrator | Tuesday 17 February 2026 02:55:14 +0000 (0:00:09.063) 0:06:06.664 ****** 2026-02-17 02:55:32.004654 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004661 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004668 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004674 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004681 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004689 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004696 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004702 | orchestrator | 2026-02-17 02:55:32.004709 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-02-17 02:55:32.004715 | orchestrator | Tuesday 17 February 2026 02:55:25 +0000 
(0:00:11.032) 0:06:17.696 ****** 2026-02-17 02:55:32.004721 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-02-17 02:55:32.004728 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-02-17 02:55:32.004734 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-02-17 02:55:32.004740 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-02-17 02:55:32.004746 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-02-17 02:55:32.004753 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-02-17 02:55:32.004759 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-02-17 02:55:32.004766 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-02-17 02:55:32.004773 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-02-17 02:55:32.004784 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-02-17 02:55:32.004788 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-02-17 02:55:32.004825 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-02-17 02:55:32.004830 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-02-17 02:55:32.004835 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-02-17 02:55:32.004839 | orchestrator | 2026-02-17 02:55:32.004844 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-02-17 02:55:32.004849 | orchestrator | Tuesday 17 February 2026 02:55:26 +0000 (0:00:01.207) 0:06:18.904 ****** 2026-02-17 02:55:32.004855 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:55:32.004860 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:55:32.004864 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:55:32.004869 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:55:32.004873 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:55:32.004877 | orchestrator | skipping: 
[testbed-node-1] 2026-02-17 02:55:32.004882 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:55:32.004886 | orchestrator | 2026-02-17 02:55:32.004890 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-02-17 02:55:32.004895 | orchestrator | Tuesday 17 February 2026 02:55:27 +0000 (0:00:00.625) 0:06:19.529 ****** 2026-02-17 02:55:32.004899 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:32.004903 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:32.004908 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:32.004912 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:32.004916 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:32.004921 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:32.004925 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:32.004930 | orchestrator | 2026-02-17 02:55:32.004934 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-02-17 02:55:32.004940 | orchestrator | Tuesday 17 February 2026 02:55:30 +0000 (0:00:03.646) 0:06:23.176 ****** 2026-02-17 02:55:32.004944 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:55:32.004948 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:55:32.004952 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:55:32.004955 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:55:32.004959 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:55:32.004963 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:55:32.004966 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:55:32.004970 | orchestrator | 2026-02-17 02:55:32.004975 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-02-17 02:55:32.004979 | orchestrator | Tuesday 17 February 2026 02:55:31 +0000 (0:00:00.524) 0:06:23.701 ****** 2026-02-17 
02:55:32.004983 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-02-17 02:55:32.004987 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-02-17 02:55:32.004991 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:55:32.004994 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-02-17 02:55:32.004998 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-02-17 02:55:32.005002 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:55:32.005006 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-02-17 02:55:32.005009 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-02-17 02:55:32.005013 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:55:32.005022 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-02-17 02:55:51.886965 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-02-17 02:55:51.887114 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:55:51.887135 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-02-17 02:55:51.887147 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-02-17 02:55:51.887159 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:55:51.887196 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-02-17 02:55:51.887209 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-02-17 02:55:51.887220 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:55:51.887230 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-02-17 02:55:51.887240 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-02-17 02:55:51.887251 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:55:51.887262 | orchestrator | 2026-02-17 02:55:51.887276 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install 
python bindings from pip)] *** 2026-02-17 02:55:51.887287 | orchestrator | Tuesday 17 February 2026 02:55:32 +0000 (0:00:00.832) 0:06:24.533 ****** 2026-02-17 02:55:51.887298 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:55:51.887309 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:55:51.887320 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:55:51.887330 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:55:51.887341 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:55:51.887351 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:55:51.887414 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:55:51.887425 | orchestrator | 2026-02-17 02:55:51.887437 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-17 02:55:51.887448 | orchestrator | Tuesday 17 February 2026 02:55:32 +0000 (0:00:00.623) 0:06:25.157 ****** 2026-02-17 02:55:51.887459 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:55:51.887469 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:55:51.887480 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:55:51.887493 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:55:51.887505 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:55:51.887517 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:55:51.887530 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:55:51.887542 | orchestrator | 2026-02-17 02:55:51.887554 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-17 02:55:51.887566 | orchestrator | Tuesday 17 February 2026 02:55:33 +0000 (0:00:00.547) 0:06:25.704 ****** 2026-02-17 02:55:51.887578 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:55:51.887590 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:55:51.887602 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:55:51.887614 | orchestrator | skipping: 
[testbed-node-5] 2026-02-17 02:55:51.887626 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:55:51.887638 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:55:51.887650 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:55:51.887662 | orchestrator | 2026-02-17 02:55:51.887675 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-17 02:55:51.887687 | orchestrator | Tuesday 17 February 2026 02:55:34 +0000 (0:00:00.559) 0:06:26.263 ****** 2026-02-17 02:55:51.887700 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.887713 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:55:51.887725 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:55:51.887736 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:55:51.887750 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:55:51.887762 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:55:51.887774 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:55:51.887786 | orchestrator | 2026-02-17 02:55:51.887798 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-17 02:55:51.887810 | orchestrator | Tuesday 17 February 2026 02:55:35 +0000 (0:00:01.851) 0:06:28.115 ****** 2026-02-17 02:55:51.887824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:55:51.887839 | orchestrator | 2026-02-17 02:55:51.887851 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-17 02:55:51.887862 | orchestrator | Tuesday 17 February 2026 02:55:36 +0000 (0:00:00.935) 0:06:29.050 ****** 2026-02-17 02:55:51.887887 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.887898 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:51.887909 | orchestrator | changed: 
[testbed-node-4] 2026-02-17 02:55:51.887920 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:51.887931 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:51.887941 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:51.887952 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:51.887963 | orchestrator | 2026-02-17 02:55:51.887973 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-17 02:55:51.887984 | orchestrator | Tuesday 17 February 2026 02:55:37 +0000 (0:00:00.864) 0:06:29.915 ****** 2026-02-17 02:55:51.887995 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.888006 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:51.888016 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:51.888027 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:51.888037 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:51.888048 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:51.888058 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:51.888069 | orchestrator | 2026-02-17 02:55:51.888080 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-17 02:55:51.888091 | orchestrator | Tuesday 17 February 2026 02:55:38 +0000 (0:00:00.910) 0:06:30.826 ****** 2026-02-17 02:55:51.888101 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.888112 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:51.888123 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:51.888133 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:51.888144 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:51.888154 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:51.888165 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:51.888175 | orchestrator | 2026-02-17 02:55:51.888186 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-02-17 02:55:51.888220 | orchestrator | Tuesday 17 February 2026 02:55:40 +0000 (0:00:01.568) 0:06:32.395 ****** 2026-02-17 02:55:51.888241 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:55:51.888269 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:55:51.888291 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:55:51.888308 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:55:51.888326 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:55:51.888343 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:55:51.888360 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:55:51.888407 | orchestrator | 2026-02-17 02:55:51.888425 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-17 02:55:51.888443 | orchestrator | Tuesday 17 February 2026 02:55:41 +0000 (0:00:01.418) 0:06:33.813 ****** 2026-02-17 02:55:51.888461 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.888479 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:51.888498 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:51.888517 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:51.888535 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:55:51.888554 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:51.888565 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:51.888576 | orchestrator | 2026-02-17 02:55:51.888589 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-17 02:55:51.888607 | orchestrator | Tuesday 17 February 2026 02:55:42 +0000 (0:00:01.367) 0:06:35.180 ****** 2026-02-17 02:55:51.888623 | orchestrator | changed: [testbed-manager] 2026-02-17 02:55:51.888639 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:55:51.888656 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:55:51.888675 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:55:51.888695 | orchestrator | changed: 
[testbed-node-0] 2026-02-17 02:55:51.888713 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:55:51.888731 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:55:51.888747 | orchestrator | 2026-02-17 02:55:51.888770 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-17 02:55:51.888780 | orchestrator | Tuesday 17 February 2026 02:55:44 +0000 (0:00:01.390) 0:06:36.571 ****** 2026-02-17 02:55:51.888791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:55:51.888803 | orchestrator | 2026-02-17 02:55:51.888813 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-17 02:55:51.888824 | orchestrator | Tuesday 17 February 2026 02:55:45 +0000 (0:00:01.144) 0:06:37.715 ****** 2026-02-17 02:55:51.888835 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:55:51.888845 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:55:51.888856 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:55:51.888867 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:55:51.888877 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.888887 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:55:51.888898 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:55:51.888908 | orchestrator | 2026-02-17 02:55:51.888919 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-17 02:55:51.888930 | orchestrator | Tuesday 17 February 2026 02:55:46 +0000 (0:00:01.336) 0:06:39.052 ****** 2026-02-17 02:55:51.888940 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.888950 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:55:51.888961 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:55:51.888971 | orchestrator | ok: [testbed-node-5] 
2026-02-17 02:55:51.888982 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:55:51.889007 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:55:51.889018 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:55:51.889029 | orchestrator | 2026-02-17 02:55:51.889040 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-17 02:55:51.889050 | orchestrator | Tuesday 17 February 2026 02:55:48 +0000 (0:00:01.188) 0:06:40.240 ****** 2026-02-17 02:55:51.889061 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.889072 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:55:51.889082 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:55:51.889093 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:55:51.889103 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:55:51.889113 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:55:51.889124 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:55:51.889134 | orchestrator | 2026-02-17 02:55:51.889144 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-17 02:55:51.889155 | orchestrator | Tuesday 17 February 2026 02:55:49 +0000 (0:00:01.200) 0:06:41.441 ****** 2026-02-17 02:55:51.889166 | orchestrator | ok: [testbed-manager] 2026-02-17 02:55:51.889176 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:55:51.889186 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:55:51.889196 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:55:51.889207 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:55:51.889217 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:55:51.889228 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:55:51.889238 | orchestrator | 2026-02-17 02:55:51.889249 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-17 02:55:51.889259 | orchestrator | Tuesday 17 February 2026 02:55:50 +0000 (0:00:01.376) 0:06:42.817 ****** 2026-02-17 02:55:51.889270 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:55:51.889284 | orchestrator | 2026-02-17 02:55:51.889310 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-17 02:55:51.889332 | orchestrator | Tuesday 17 February 2026 02:55:51 +0000 (0:00:00.942) 0:06:43.760 ****** 2026-02-17 02:55:51.889349 | orchestrator | 2026-02-17 02:55:51.889395 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-17 02:55:51.889428 | orchestrator | Tuesday 17 February 2026 02:55:51 +0000 (0:00:00.042) 0:06:43.802 ****** 2026-02-17 02:55:51.889448 | orchestrator | 2026-02-17 02:55:51.889470 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-17 02:55:51.889490 | orchestrator | Tuesday 17 February 2026 02:55:51 +0000 (0:00:00.040) 0:06:43.843 ****** 2026-02-17 02:55:51.889512 | orchestrator | 2026-02-17 02:55:51.889524 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-17 02:55:51.889549 | orchestrator | Tuesday 17 February 2026 02:55:51 +0000 (0:00:00.055) 0:06:43.899 ****** 2026-02-17 02:56:18.513636 | orchestrator | 2026-02-17 02:56:18.513767 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-17 02:56:18.513804 | orchestrator | Tuesday 17 February 2026 02:55:51 +0000 (0:00:00.042) 0:06:43.941 ****** 2026-02-17 02:56:18.513829 | orchestrator | 2026-02-17 02:56:18.513844 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-17 02:56:18.513857 | orchestrator | Tuesday 17 February 2026 02:55:51 +0000 (0:00:00.044) 0:06:43.985 ****** 2026-02-17 02:56:18.513869 | orchestrator | 
2026-02-17 02:56:18.513881 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-17 02:56:18.513895 | orchestrator | Tuesday 17 February 2026 02:55:51 +0000 (0:00:00.055) 0:06:44.041 ****** 2026-02-17 02:56:18.513909 | orchestrator | 2026-02-17 02:56:18.513923 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-17 02:56:18.513936 | orchestrator | Tuesday 17 February 2026 02:55:51 +0000 (0:00:00.046) 0:06:44.088 ****** 2026-02-17 02:56:18.513952 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:56:18.513967 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:56:18.513982 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:56:18.513996 | orchestrator | 2026-02-17 02:56:18.514011 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-17 02:56:18.514107 | orchestrator | Tuesday 17 February 2026 02:55:52 +0000 (0:00:01.099) 0:06:45.188 ****** 2026-02-17 02:56:18.514123 | orchestrator | changed: [testbed-manager] 2026-02-17 02:56:18.514140 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:56:18.514156 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:56:18.514171 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:56:18.514190 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:56:18.514209 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:56:18.514229 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:56:18.514247 | orchestrator | 2026-02-17 02:56:18.514267 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-17 02:56:18.514284 | orchestrator | Tuesday 17 February 2026 02:55:54 +0000 (0:00:01.523) 0:06:46.712 ****** 2026-02-17 02:56:18.514310 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:56:18.514336 | orchestrator | changed: [testbed-manager] 2026-02-17 02:56:18.514365 | orchestrator | changed: [testbed-node-4] 
2026-02-17 02:56:18.514418 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:56:18.514442 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:56:18.514458 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:56:18.514477 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:56:18.514495 | orchestrator | 2026-02-17 02:56:18.514514 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-17 02:56:18.514533 | orchestrator | Tuesday 17 February 2026 02:55:55 +0000 (0:00:01.231) 0:06:47.944 ****** 2026-02-17 02:56:18.514560 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:56:18.514576 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:56:18.514591 | orchestrator | changed: [testbed-node-5] 2026-02-17 02:56:18.514608 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:56:18.514623 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:56:18.514639 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:56:18.514652 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:56:18.514666 | orchestrator | 2026-02-17 02:56:18.514681 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-17 02:56:18.514696 | orchestrator | Tuesday 17 February 2026 02:55:58 +0000 (0:00:02.459) 0:06:50.403 ****** 2026-02-17 02:56:18.514758 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:56:18.514775 | orchestrator | 2026-02-17 02:56:18.514790 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-17 02:56:18.514805 | orchestrator | Tuesday 17 February 2026 02:55:58 +0000 (0:00:00.129) 0:06:50.533 ****** 2026-02-17 02:56:18.514815 | orchestrator | ok: [testbed-manager] 2026-02-17 02:56:18.514824 | orchestrator | changed: [testbed-node-3] 2026-02-17 02:56:18.514832 | orchestrator | changed: [testbed-node-4] 2026-02-17 02:56:18.514840 | orchestrator | changed: [testbed-node-5] 2026-02-17 
02:56:18.514849 | orchestrator | changed: [testbed-node-0] 2026-02-17 02:56:18.514857 | orchestrator | changed: [testbed-node-1] 2026-02-17 02:56:18.514865 | orchestrator | changed: [testbed-node-2] 2026-02-17 02:56:18.514874 | orchestrator | 2026-02-17 02:56:18.514882 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-17 02:56:18.514892 | orchestrator | Tuesday 17 February 2026 02:55:59 +0000 (0:00:01.083) 0:06:51.617 ****** 2026-02-17 02:56:18.514900 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:56:18.514909 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:56:18.514917 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:56:18.514926 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:56:18.514934 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:56:18.514942 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:56:18.514951 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:56:18.514959 | orchestrator | 2026-02-17 02:56:18.514968 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-17 02:56:18.514977 | orchestrator | Tuesday 17 February 2026 02:56:00 +0000 (0:00:00.650) 0:06:52.267 ****** 2026-02-17 02:56:18.514986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:56:18.514998 | orchestrator | 2026-02-17 02:56:18.515006 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-17 02:56:18.515015 | orchestrator | Tuesday 17 February 2026 02:56:01 +0000 (0:00:01.233) 0:06:53.501 ****** 2026-02-17 02:56:18.515023 | orchestrator | ok: [testbed-manager] 2026-02-17 02:56:18.515032 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:56:18.515040 | orchestrator 
| ok: [testbed-node-4] 2026-02-17 02:56:18.515049 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:56:18.515057 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:56:18.515066 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:56:18.515075 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:56:18.515083 | orchestrator | 2026-02-17 02:56:18.515092 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-17 02:56:18.515100 | orchestrator | Tuesday 17 February 2026 02:56:02 +0000 (0:00:00.867) 0:06:54.368 ****** 2026-02-17 02:56:18.515109 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-17 02:56:18.515139 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-17 02:56:18.515149 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-17 02:56:18.515157 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-17 02:56:18.515166 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-17 02:56:18.515174 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-17 02:56:18.515183 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-17 02:56:18.515191 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-17 02:56:18.515200 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-17 02:56:18.515208 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-17 02:56:18.515216 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-17 02:56:18.515225 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-17 02:56:18.515242 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-17 02:56:18.515251 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-17 02:56:18.515259 | orchestrator | 2026-02-17 02:56:18.515268 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-17 02:56:18.515276 | orchestrator | Tuesday 17 February 2026 02:56:04 +0000 (0:00:02.456) 0:06:56.825 ****** 2026-02-17 02:56:18.515285 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:56:18.515293 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:56:18.515302 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:56:18.515310 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:56:18.515318 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:56:18.515327 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:56:18.515335 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:56:18.515350 | orchestrator | 2026-02-17 02:56:18.515364 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-17 02:56:18.515378 | orchestrator | Tuesday 17 February 2026 02:56:05 +0000 (0:00:00.813) 0:06:57.639 ****** 2026-02-17 02:56:18.515421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 02:56:18.515438 | orchestrator | 2026-02-17 02:56:18.515453 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-17 02:56:18.515468 | orchestrator | Tuesday 17 February 2026 02:56:06 +0000 (0:00:00.928) 0:06:58.568 ****** 2026-02-17 02:56:18.515479 | orchestrator | ok: [testbed-manager] 2026-02-17 02:56:18.515488 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:56:18.515496 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:56:18.515505 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:56:18.515513 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:56:18.515522 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:56:18.515531 | orchestrator | ok: 
[testbed-node-2] 2026-02-17 02:56:18.515539 | orchestrator | 2026-02-17 02:56:18.515548 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-17 02:56:18.515556 | orchestrator | Tuesday 17 February 2026 02:56:07 +0000 (0:00:00.866) 0:06:59.434 ****** 2026-02-17 02:56:18.515571 | orchestrator | ok: [testbed-manager] 2026-02-17 02:56:18.515580 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:56:18.515588 | orchestrator | ok: [testbed-node-4] 2026-02-17 02:56:18.515597 | orchestrator | ok: [testbed-node-5] 2026-02-17 02:56:18.515605 | orchestrator | ok: [testbed-node-0] 2026-02-17 02:56:18.515614 | orchestrator | ok: [testbed-node-1] 2026-02-17 02:56:18.515622 | orchestrator | ok: [testbed-node-2] 2026-02-17 02:56:18.515667 | orchestrator | 2026-02-17 02:56:18.515681 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-17 02:56:18.515695 | orchestrator | Tuesday 17 February 2026 02:56:08 +0000 (0:00:01.116) 0:07:00.551 ****** 2026-02-17 02:56:18.515709 | orchestrator | skipping: [testbed-manager] 2026-02-17 02:56:18.515724 | orchestrator | skipping: [testbed-node-3] 2026-02-17 02:56:18.515739 | orchestrator | skipping: [testbed-node-4] 2026-02-17 02:56:18.515753 | orchestrator | skipping: [testbed-node-5] 2026-02-17 02:56:18.515768 | orchestrator | skipping: [testbed-node-0] 2026-02-17 02:56:18.515777 | orchestrator | skipping: [testbed-node-1] 2026-02-17 02:56:18.515785 | orchestrator | skipping: [testbed-node-2] 2026-02-17 02:56:18.515794 | orchestrator | 2026-02-17 02:56:18.515802 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-17 02:56:18.515811 | orchestrator | Tuesday 17 February 2026 02:56:08 +0000 (0:00:00.533) 0:07:01.085 ****** 2026-02-17 02:56:18.515819 | orchestrator | ok: [testbed-manager] 2026-02-17 02:56:18.515828 | orchestrator | ok: [testbed-node-3] 2026-02-17 02:56:18.515836 | 
orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:18.515845 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:18.515853 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:18.515870 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:18.515879 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:18.515887 | orchestrator |
2026-02-17 02:56:18.515896 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-17 02:56:18.515904 | orchestrator | Tuesday 17 February 2026 02:56:10 +0000 (0:00:01.619) 0:07:02.704 ******
2026-02-17 02:56:18.515913 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:56:18.515921 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:56:18.515930 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:56:18.515938 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:56:18.515947 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:56:18.515955 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:56:18.515964 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:56:18.515972 | orchestrator |
2026-02-17 02:56:18.515980 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-17 02:56:18.515989 | orchestrator | Tuesday 17 February 2026 02:56:11 +0000 (0:00:00.620) 0:07:03.325 ******
2026-02-17 02:56:18.515998 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:18.516006 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:56:18.516014 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:56:18.516023 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:56:18.516031 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:56:18.516040 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:56:18.516057 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:56:56.062259 | orchestrator |
2026-02-17 02:56:56.062347 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-17 02:56:56.062357 | orchestrator | Tuesday 17 February 2026 02:56:18 +0000 (0:00:07.398) 0:07:10.724 ******
2026-02-17 02:56:56.062361 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062367 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:56:56.062372 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:56:56.062376 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:56:56.062380 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:56:56.062384 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:56:56.062388 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:56:56.062392 | orchestrator |
2026-02-17 02:56:56.062396 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-17 02:56:56.062400 | orchestrator | Tuesday 17 February 2026 02:56:20 +0000 (0:00:01.647) 0:07:12.372 ******
2026-02-17 02:56:56.062404 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062408 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:56:56.062411 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:56:56.062447 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:56:56.062453 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:56:56.062457 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:56:56.062460 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:56:56.062464 | orchestrator |
2026-02-17 02:56:56.062468 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-17 02:56:56.062472 | orchestrator | Tuesday 17 February 2026 02:56:21 +0000 (0:00:01.734) 0:07:14.106 ******
2026-02-17 02:56:56.062475 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062479 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:56:56.062483 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:56:56.062487 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:56:56.062490 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:56:56.062494 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:56:56.062498 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:56:56.062502 | orchestrator |
2026-02-17 02:56:56.062505 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-17 02:56:56.062509 | orchestrator | Tuesday 17 February 2026 02:56:23 +0000 (0:00:01.788) 0:07:15.895 ******
2026-02-17 02:56:56.062513 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062517 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:56:56.062521 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:56.062540 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:56.062544 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:56.062548 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:56.062552 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:56.062555 | orchestrator |
2026-02-17 02:56:56.062559 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-17 02:56:56.062563 | orchestrator | Tuesday 17 February 2026 02:56:24 +0000 (0:00:00.995) 0:07:16.891 ******
2026-02-17 02:56:56.062567 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:56:56.062571 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:56:56.062574 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:56:56.062578 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:56:56.062582 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:56:56.062585 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:56:56.062589 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:56:56.062593 | orchestrator |
2026-02-17 02:56:56.062597 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-17 02:56:56.062601 | orchestrator | Tuesday 17 February 2026 02:56:26 +0000 (0:00:01.388) 0:07:18.280 ******
2026-02-17 02:56:56.062604 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:56:56.062608 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:56:56.062612 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:56:56.062616 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:56:56.062620 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:56:56.062623 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:56:56.062627 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:56:56.062631 | orchestrator |
2026-02-17 02:56:56.062635 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-17 02:56:56.062638 | orchestrator | Tuesday 17 February 2026 02:56:26 +0000 (0:00:00.579) 0:07:18.860 ******
2026-02-17 02:56:56.062642 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062658 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:56:56.062662 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:56.062666 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:56.062670 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:56.062673 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:56.062677 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:56.062681 | orchestrator |
2026-02-17 02:56:56.062684 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-17 02:56:56.062688 | orchestrator | Tuesday 17 February 2026 02:56:27 +0000 (0:00:00.574) 0:07:19.435 ******
2026-02-17 02:56:56.062692 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062696 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:56:56.062699 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:56.062703 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:56.062707 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:56.062711 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:56.062714 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:56.062718 | orchestrator |
2026-02-17 02:56:56.062722 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-17 02:56:56.062726 | orchestrator | Tuesday 17 February 2026 02:56:27 +0000 (0:00:00.676) 0:07:20.111 ******
2026-02-17 02:56:56.062729 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062733 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:56:56.062737 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:56.062747 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:56.062751 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:56.062754 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:56.062758 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:56.062762 | orchestrator |
2026-02-17 02:56:56.062765 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-17 02:56:56.062775 | orchestrator | Tuesday 17 February 2026 02:56:28 +0000 (0:00:00.985) 0:07:21.097 ******
2026-02-17 02:56:56.062779 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062783 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:56:56.062790 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:56.062794 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:56.062797 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:56.062801 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:56.062805 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:56.062808 | orchestrator |
2026-02-17 02:56:56.062823 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-17 02:56:56.062828 | orchestrator | Tuesday 17 February 2026 02:56:34 +0000 (0:00:05.579) 0:07:26.677 ******
2026-02-17 02:56:56.062832 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:56:56.062837 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:56:56.062842 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:56:56.062846 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:56:56.062851 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:56:56.062855 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:56:56.062859 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:56:56.062864 | orchestrator |
2026-02-17 02:56:56.062868 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-17 02:56:56.062872 | orchestrator | Tuesday 17 February 2026 02:56:35 +0000 (0:00:00.654) 0:07:27.331 ******
2026-02-17 02:56:56.062878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:56:56.062884 | orchestrator |
2026-02-17 02:56:56.062889 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-17 02:56:56.062893 | orchestrator | Tuesday 17 February 2026 02:56:36 +0000 (0:00:01.382) 0:07:28.713 ******
2026-02-17 02:56:56.062898 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:56:56.062902 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:56.062907 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:56.062911 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:56.062915 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:56.062920 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:56.062924 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062928 | orchestrator |
2026-02-17 02:56:56.062933 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-17 02:56:56.062937 | orchestrator | Tuesday 17 February 2026 02:56:38 +0000 (0:00:02.093) 0:07:30.807 ******
2026-02-17 02:56:56.062942 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062946 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:56:56.062950 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:56.062955 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:56.062959 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:56.062963 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:56.062968 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:56.062972 | orchestrator |
2026-02-17 02:56:56.062977 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-17 02:56:56.062981 | orchestrator | Tuesday 17 February 2026 02:56:40 +0000 (0:00:02.235) 0:07:33.043 ******
2026-02-17 02:56:56.062985 | orchestrator | ok: [testbed-manager]
2026-02-17 02:56:56.062989 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:56:56.062994 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:56:56.062998 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:56:56.063002 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:56:56.063007 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:56:56.063011 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:56:56.063016 | orchestrator |
2026-02-17 02:56:56.063021 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-17 02:56:56.063024 | orchestrator | Tuesday 17 February 2026 02:56:41 +0000 (0:00:00.910) 0:07:33.953 ******
2026-02-17 02:56:56.063031 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-17 02:56:56.063036 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-17 02:56:56.063043 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-17 02:56:56.063047 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-17 02:56:56.063051 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-17 02:56:56.063055 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-17 02:56:56.063058 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-17 02:56:56.063062 | orchestrator |
2026-02-17 02:56:56.063066 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-17 02:56:56.063070 | orchestrator | Tuesday 17 February 2026 02:56:43 +0000 (0:00:02.243) 0:07:36.197 ******
2026-02-17 02:56:56.063074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:56:56.063077 | orchestrator |
2026-02-17 02:56:56.063081 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-17 02:56:56.063085 | orchestrator | Tuesday 17 February 2026 02:56:45 +0000 (0:00:01.033) 0:07:37.231 ******
2026-02-17 02:56:56.063089 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:56:56.063092 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:56:56.063096 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:56:56.063100 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:56:56.063104 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:56:56.063107 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:56:56.063111 | orchestrator | changed: [testbed-manager]
2026-02-17 02:56:56.063115 | orchestrator |
2026-02-17 02:56:56.063121 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-17 02:57:33.303731 | orchestrator | Tuesday 17 February 2026 02:56:56 +0000 (0:00:11.038) 0:07:48.269 ******
2026-02-17 02:57:33.303822 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:57:33.303831 | orchestrator | ok: [testbed-manager]
2026-02-17 02:57:33.303838 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:57:33.303844 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:57:33.303850 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:57:33.303855 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:57:33.303861 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:57:33.303867 | orchestrator |
2026-02-17 02:57:33.303874 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-17 02:57:33.303881 | orchestrator | Tuesday 17 February 2026 02:56:58 +0000 (0:00:02.445) 0:07:50.715 ******
2026-02-17 02:57:33.303886 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:57:33.303892 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:57:33.303898 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:57:33.303903 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:57:33.303909 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:57:33.303914 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:57:33.303920 | orchestrator |
2026-02-17 02:57:33.303926 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-17 02:57:33.303932 | orchestrator | Tuesday 17 February 2026 02:56:59 +0000 (0:00:01.300) 0:07:52.016 ******
2026-02-17 02:57:33.303938 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.303944 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.303950 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.303956 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.303961 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.303984 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.303990 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.303996 | orchestrator |
2026-02-17 02:57:33.304002 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-17 02:57:33.304007 | orchestrator |
2026-02-17 02:57:33.304013 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-17 02:57:33.304019 | orchestrator | Tuesday 17 February 2026 02:57:01 +0000 (0:00:01.368) 0:07:53.384 ******
2026-02-17 02:57:33.304024 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:57:33.304030 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:57:33.304035 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:57:33.304041 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:57:33.304046 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:57:33.304052 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:57:33.304057 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:57:33.304063 | orchestrator |
2026-02-17 02:57:33.304069 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-17 02:57:33.304074 | orchestrator |
2026-02-17 02:57:33.304080 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-17 02:57:33.304086 | orchestrator | Tuesday 17 February 2026 02:57:02 +0000 (0:00:00.983) 0:07:54.367 ******
2026-02-17 02:57:33.304092 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.304097 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.304103 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.304108 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.304114 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.304120 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.304125 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.304131 | orchestrator |
2026-02-17 02:57:33.304136 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-17 02:57:33.304153 | orchestrator | Tuesday 17 February 2026 02:57:03 +0000 (0:00:01.436) 0:07:55.804 ******
2026-02-17 02:57:33.304159 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:57:33.304165 | orchestrator | ok: [testbed-manager]
2026-02-17 02:57:33.304170 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:57:33.304176 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:57:33.304182 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:57:33.304187 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:57:33.304193 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:57:33.304198 | orchestrator |
2026-02-17 02:57:33.304204 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-17 02:57:33.304210 | orchestrator | Tuesday 17 February 2026 02:57:05 +0000 (0:00:01.621) 0:07:57.425 ******
2026-02-17 02:57:33.304215 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:57:33.304221 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:57:33.304227 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:57:33.304232 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:57:33.304238 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:57:33.304244 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:57:33.304249 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:57:33.304255 | orchestrator |
2026-02-17 02:57:33.304260 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-17 02:57:33.304268 | orchestrator | Tuesday 17 February 2026 02:57:05 +0000 (0:00:00.643) 0:07:58.069 ******
2026-02-17 02:57:33.304279 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:57:33.304293 | orchestrator |
2026-02-17 02:57:33.304307 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-17 02:57:33.304317 | orchestrator | Tuesday 17 February 2026 02:57:07 +0000 (0:00:01.364) 0:07:59.434 ******
2026-02-17 02:57:33.304327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:57:33.304347 | orchestrator |
2026-02-17 02:57:33.304357 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-17 02:57:33.304366 | orchestrator | Tuesday 17 February 2026 02:57:08 +0000 (0:00:01.014) 0:08:00.448 ******
2026-02-17 02:57:33.304375 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.304385 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.304395 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.304405 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.304415 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.304426 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.304435 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.304446 | orchestrator |
2026-02-17 02:57:33.304490 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-17 02:57:33.304498 | orchestrator | Tuesday 17 February 2026 02:57:18 +0000 (0:00:10.411) 0:08:10.860 ******
2026-02-17 02:57:33.304504 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.304511 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.304517 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.304524 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.304530 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.304537 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.304543 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.304550 | orchestrator |
2026-02-17 02:57:33.304556 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-17 02:57:33.304563 | orchestrator | Tuesday 17 February 2026 02:57:19 +0000 (0:00:01.333) 0:08:12.194 ******
2026-02-17 02:57:33.304569 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.304575 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.304582 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.304588 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.304595 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.304601 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.304607 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.304614 | orchestrator |
2026-02-17 02:57:33.304620 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-17 02:57:33.304627 | orchestrator | Tuesday 17 February 2026 02:57:21 +0000 (0:00:01.441) 0:08:13.635 ******
2026-02-17 02:57:33.304633 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.304640 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.304646 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.304652 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.304658 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.304664 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.304669 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.304678 | orchestrator |
2026-02-17 02:57:33.304687 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-17 02:57:33.304697 | orchestrator | Tuesday 17 February 2026 02:57:23 +0000 (0:00:02.298) 0:08:15.934 ******
2026-02-17 02:57:33.304706 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.304716 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.304725 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.304735 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.304744 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.304753 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.304761 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.304771 | orchestrator |
2026-02-17 02:57:33.304780 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-17 02:57:33.304789 | orchestrator | Tuesday 17 February 2026 02:57:26 +0000 (0:00:02.354) 0:08:18.289 ******
2026-02-17 02:57:33.304798 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.304804 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.304815 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.304821 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.304827 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.304832 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.304838 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.304843 | orchestrator |
2026-02-17 02:57:33.304849 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-17 02:57:33.304855 | orchestrator |
2026-02-17 02:57:33.304866 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-17 02:57:33.304872 | orchestrator | Tuesday 17 February 2026 02:57:27 +0000 (0:00:01.223) 0:08:19.512 ******
2026-02-17 02:57:33.304878 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:57:33.304884 | orchestrator |
2026-02-17 02:57:33.304890 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-17 02:57:33.304895 | orchestrator | Tuesday 17 February 2026 02:57:28 +0000 (0:00:01.032) 0:08:20.545 ******
2026-02-17 02:57:33.304901 | orchestrator | ok: [testbed-manager]
2026-02-17 02:57:33.304906 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:57:33.304912 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:57:33.304918 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:57:33.304923 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:57:33.304929 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:57:33.304935 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:57:33.304940 | orchestrator |
2026-02-17 02:57:33.304946 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-17 02:57:33.304952 | orchestrator | Tuesday 17 February 2026 02:57:29 +0000 (0:00:01.241) 0:08:21.787 ******
2026-02-17 02:57:33.304957 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:33.304963 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:33.304969 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:33.304974 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:33.304980 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:33.304986 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:33.304991 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:33.304997 | orchestrator |
2026-02-17 02:57:33.305003 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-17 02:57:33.305008 | orchestrator | Tuesday 17 February 2026 02:57:30 +0000 (0:00:01.390) 0:08:23.177 ******
2026-02-17 02:57:33.305014 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 02:57:33.305020 | orchestrator |
2026-02-17 02:57:33.305030 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-17 02:57:33.305039 | orchestrator | Tuesday 17 February 2026 02:57:32 +0000 (0:00:01.378) 0:08:24.556 ******
2026-02-17 02:57:33.305049 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:57:33.305059 | orchestrator | ok: [testbed-manager]
2026-02-17 02:57:33.305068 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:57:33.305078 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:57:33.305088 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:57:33.305097 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:57:33.305106 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:57:33.305115 | orchestrator |
2026-02-17 02:57:33.305130 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-17 02:57:35.514523 | orchestrator | Tuesday 17 February 2026 02:57:33 +0000 (0:00:00.953) 0:08:25.510 ******
2026-02-17 02:57:35.514599 | orchestrator | changed: [testbed-manager]
2026-02-17 02:57:35.514620 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:57:35.514627 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:57:35.514633 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:57:35.514638 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:57:35.514652 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:57:35.514658 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:57:35.514749 | orchestrator |
2026-02-17 02:57:35.514759 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 02:57:35.514766 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-17 02:57:35.514773 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-17 02:57:35.514783 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-17 02:57:35.514792 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-17 02:57:35.514802 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-17 02:57:35.514811 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-17 02:57:35.514820 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-17 02:57:35.514829 | orchestrator |
2026-02-17 02:57:35.514838 | orchestrator |
2026-02-17 02:57:35.514847 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 02:57:35.514857 | orchestrator | Tuesday 17 February 2026 02:57:34 +0000 (0:00:01.279) 0:08:26.790 ******
2026-02-17 02:57:35.514867 | orchestrator | ===============================================================================
2026-02-17 02:57:35.514875 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.88s
2026-02-17 02:57:35.514886 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.42s
2026-02-17 02:57:35.514895 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.56s
2026-02-17 02:57:35.514904 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.75s
2026-02-17 02:57:35.514914 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.30s
2026-02-17 02:57:35.514940 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.24s
2026-02-17 02:57:35.514949 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 11.04s
2026-02-17 02:57:35.514958 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.03s
2026-02-17 02:57:35.514967 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.41s
2026-02-17 02:57:35.514976 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.06s
2026-02-17 02:57:35.514985 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.79s
2026-02-17 02:57:35.514995 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.31s
2026-02-17 02:57:35.515005 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.04s
2026-02-17 02:57:35.515016 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.67s
2026-02-17 02:57:35.515025 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.64s
2026-02-17 02:57:35.515035 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.40s
2026-02-17 02:57:35.515044 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.82s
2026-02-17 02:57:35.515054 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.09s
2026-02-17 02:57:35.515062 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.71s
2026-02-17 02:57:35.515072 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.58s
2026-02-17 02:57:36.029813 | orchestrator | + osism apply fail2ban
2026-02-17 02:57:49.697181 | orchestrator | 2026-02-17 02:57:49 | INFO  | Task 16d13eb4-a446-4443-b32c-e22325bda9c6 (fail2ban) was prepared for execution.
2026-02-17 02:57:49.697290 | orchestrator | 2026-02-17 02:57:49 | INFO  | It takes a moment until task 16d13eb4-a446-4443-b32c-e22325bda9c6 (fail2ban) has been started and output is visible here.
2026-02-17 02:58:14.217700 | orchestrator |
2026-02-17 02:58:14.217805 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-17 02:58:14.217818 | orchestrator |
2026-02-17 02:58:14.217827 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-17 02:58:14.217836 | orchestrator | Tuesday 17 February 2026 02:57:55 +0000 (0:00:00.349) 0:00:00.349 ******
2026-02-17 02:58:14.217845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 02:58:14.217856 | orchestrator |
2026-02-17 02:58:14.217864 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-17 02:58:14.217873 | orchestrator | Tuesday 17 February 2026 02:57:56 +0000 (0:00:01.401) 0:00:01.751 ******
2026-02-17 02:58:14.217881 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:58:14.217889 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:58:14.217897 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:58:14.217905 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:58:14.217913 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:58:14.217921 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:58:14.217929 | orchestrator | changed: [testbed-manager]
2026-02-17 02:58:14.217937 | orchestrator |
2026-02-17 02:58:14.217945 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-17 02:58:14.217955 | orchestrator | Tuesday 17 February 2026 02:58:08 +0000 (0:00:12.063) 0:00:13.814 ******
2026-02-17 02:58:14.217967 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:58:14.217980 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:58:14.217993 | orchestrator | changed: [testbed-manager]
2026-02-17 02:58:14.218006 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:58:14.218077 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:58:14.218086 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:58:14.218094 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:58:14.218102 | orchestrator |
2026-02-17 02:58:14.218110 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-17 02:58:14.218118 | orchestrator | Tuesday 17 February 2026 02:58:10 +0000 (0:00:01.487) 0:00:15.302 ******
2026-02-17 02:58:14.218126 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:58:14.218135 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:58:14.218142 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:14.218150 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:58:14.218158 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:58:14.218166 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:58:14.218174 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:58:14.218182 | orchestrator |
2026-02-17 02:58:14.218190 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-17 02:58:14.218198 | orchestrator | Tuesday 17 February 2026 02:58:12 +0000 (0:00:01.597) 0:00:16.899 ******
2026-02-17 02:58:14.218206 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:58:14.218214 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:58:14.218222 | orchestrator | changed: [testbed-manager]
2026-02-17 02:58:14.218230 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:58:14.218239 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:58:14.218248 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:58:14.218257 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:58:14.218266 | orchestrator |
2026-02-17 02:58:14.218275 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 02:58:14.218284 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:58:14.218323 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:58:14.218339 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:58:14.218352 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:58:14.218365 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:58:14.218378 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:58:14.218391 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:58:14.218403 | orchestrator |
2026-02-17 02:58:14.218416 | orchestrator |
2026-02-17 02:58:14.218428 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 02:58:14.218441 | orchestrator | Tuesday 17 February 2026 02:58:13 +0000 (0:00:01.657) 0:00:18.557 ******
2026-02-17 02:58:14.218454 | orchestrator | ===============================================================================
2026-02-17 02:58:14.218466 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.06s
2026-02-17 02:58:14.218479 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.66s
2026-02-17 02:58:14.218513 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.60s
2026-02-17 02:58:14.218526 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.49s
2026-02-17 02:58:14.218540 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.40s
2026-02-17 02:58:14.594353 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-17 02:58:14.594469 | orchestrator | + osism apply network
2026-02-17 02:58:26.796178 | orchestrator | 2026-02-17 02:58:26 | INFO  | Task f254dd69-7975-4090-ab09-43a24347fc94 (network) was prepared for execution.
2026-02-17 02:58:26.796299 | orchestrator | 2026-02-17 02:58:26 | INFO  | It takes a moment until task f254dd69-7975-4090-ab09-43a24347fc94 (network) has been started and output is visible here.
2026-02-17 02:58:56.854574 | orchestrator |
2026-02-17 02:58:56.854704 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-17 02:58:56.854728 | orchestrator |
2026-02-17 02:58:56.854745 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-17 02:58:56.854762 | orchestrator | Tuesday 17 February 2026 02:58:31 +0000 (0:00:00.281) 0:00:00.281 ******
2026-02-17 02:58:56.854778 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:56.854796 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:58:56.854812 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:58:56.854828 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:58:56.854846 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:58:56.854863 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:58:56.854879 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:58:56.854896 | orchestrator |
2026-02-17 02:58:56.854914 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-17 02:58:56.854930 | orchestrator | Tuesday 17 February 2026 02:58:32 +0000 (0:00:00.784) 0:00:01.065 ******
2026-02-17 02:58:56.854948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 02:58:56.854961 | orchestrator |
2026-02-17 02:58:56.854971 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-17 02:58:56.855005 | orchestrator | Tuesday 17 February 2026 02:58:33 +0000 (0:00:01.276) 0:00:02.342 ******
2026-02-17 02:58:56.855015 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:58:56.855025 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:58:56.855034 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:58:56.855044 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:58:56.855053 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:58:56.855063 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:58:56.855072 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:56.855083 | orchestrator |
2026-02-17 02:58:56.855096 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-17 02:58:56.855113 | orchestrator | Tuesday 17 February 2026 02:58:35 +0000 (0:00:01.706) 0:00:04.049 ******
2026-02-17 02:58:56.855131 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:58:56.855149 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:58:56.855165 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:56.855182 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:58:56.855193 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:58:56.855204 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:58:56.855214 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:58:56.855225 | orchestrator |
2026-02-17 02:58:56.855236 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-17 02:58:56.855246 | orchestrator | Tuesday 17 February 2026 02:58:36 +0000 (0:00:01.585) 0:00:05.634 ******
2026-02-17 02:58:56.855258 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-17 02:58:56.855270 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-17 02:58:56.855281 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-17 02:58:56.855292 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-17 02:58:56.855303 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-17 02:58:56.855314 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-17 02:58:56.855325 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-17 02:58:56.855336 | orchestrator |
2026-02-17 02:58:56.855363 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-17 02:58:56.855379 | orchestrator | Tuesday 17 February 2026 02:58:37 +0000 (0:00:01.063) 0:00:06.697 ******
2026-02-17 02:58:56.855390 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-17 02:58:56.855402 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-17 02:58:56.855413 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 02:58:56.855424 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-17 02:58:56.855435 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-17 02:58:56.855445 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-17 02:58:56.855456 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-17 02:58:56.855467 | orchestrator |
2026-02-17 02:58:56.855479 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-17 02:58:56.855490 | orchestrator | Tuesday 17 February 2026 02:58:41 +0000 (0:00:03.694) 0:00:10.392 ******
2026-02-17 02:58:56.855501 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:58:56.855510 | orchestrator | changed: [testbed-manager]
2026-02-17 02:58:56.855615 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:58:56.855626 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:58:56.855636 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:58:56.855645 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:58:56.855654 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:58:56.855664 | orchestrator |
2026-02-17 02:58:56.855673 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-17 02:58:56.855683 | orchestrator | Tuesday 17 February 2026 02:58:43 +0000 (0:00:01.798) 0:00:12.191 ******
2026-02-17 02:58:56.855692 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 02:58:56.855702 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-17 02:58:56.855711 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-17 02:58:56.855720 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-17 02:58:56.855739 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-17 02:58:56.855749 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-17 02:58:56.855758 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-17 02:58:56.855767 | orchestrator |
2026-02-17 02:58:56.855777 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-17 02:58:56.855786 | orchestrator | Tuesday 17 February 2026 02:58:45 +0000 (0:00:01.190) 0:00:14.039 ******
2026-02-17 02:58:56.855796 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:56.855805 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:58:56.855815 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:58:56.855824 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:58:56.855834 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:58:56.855843 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:58:56.855852 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:58:56.855862 | orchestrator |
2026-02-17 02:58:56.855871 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-17 02:58:56.855899 | orchestrator | Tuesday 17 February 2026 02:58:46 +0000 (0:00:01.190) 0:00:15.230 ******
2026-02-17 02:58:56.855909 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:58:56.855919 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:58:56.855928 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:58:56.855937 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:58:56.855947 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:58:56.855956 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:58:56.855965 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:58:56.855975 | orchestrator |
2026-02-17 02:58:56.855984 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-17 02:58:56.855994 | orchestrator | Tuesday 17 February 2026 02:58:47 +0000 (0:00:00.857) 0:00:16.087 ******
2026-02-17 02:58:56.856003 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:58:56.856013 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:56.856022 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:58:56.856032 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:58:56.856041 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:58:56.856050 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:58:56.856060 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:58:56.856069 | orchestrator |
2026-02-17 02:58:56.856078 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-17 02:58:56.856088 | orchestrator | Tuesday 17 February 2026 02:58:49 +0000 (0:00:02.314) 0:00:18.402 ******
2026-02-17 02:58:56.856098 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:58:56.856107 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:58:56.856116 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:58:56.856126 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:58:56.856139 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:58:56.856155 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:58:56.856169 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-17 02:58:56.856195 | orchestrator |
2026-02-17 02:58:56.856214 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-17 02:58:56.856229 | orchestrator | Tuesday 17 February 2026 02:58:50 +0000 (0:00:00.951) 0:00:19.353 ******
2026-02-17 02:58:56.856245 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:56.856260 | orchestrator | changed: [testbed-node-1]
2026-02-17 02:58:56.856275 | orchestrator | changed: [testbed-node-0]
2026-02-17 02:58:56.856290 | orchestrator | changed: [testbed-node-2]
2026-02-17 02:58:56.856307 | orchestrator | changed: [testbed-node-3]
2026-02-17 02:58:56.856323 | orchestrator | changed: [testbed-node-4]
2026-02-17 02:58:56.856339 | orchestrator | changed: [testbed-node-5]
2026-02-17 02:58:56.856355 | orchestrator |
2026-02-17 02:58:56.856369 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-17 02:58:56.856379 | orchestrator | Tuesday 17 February 2026 02:58:52 +0000 (0:00:01.714) 0:00:21.067 ******
2026-02-17 02:58:56.856389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 02:58:56.856414 | orchestrator |
2026-02-17 02:58:56.856424 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-17 02:58:56.856434 | orchestrator | Tuesday 17 February 2026 02:58:53 +0000 (0:00:01.400) 0:00:22.467 ******
2026-02-17 02:58:56.856443 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:58:56.856452 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:56.856462 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:58:56.856471 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:58:56.856487 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:58:56.856497 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:58:56.856506 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:58:56.856540 | orchestrator |
2026-02-17 02:58:56.856559 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-17 02:58:56.856569 | orchestrator | Tuesday 17 February 2026 02:58:54 +0000 (0:00:01.167) 0:00:23.635 ******
2026-02-17 02:58:56.856579 | orchestrator | ok: [testbed-manager]
2026-02-17 02:58:56.856588 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:58:56.856598 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:58:56.856607 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:58:56.856616 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:58:56.856626 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:58:56.856635 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:58:56.856645 | orchestrator |
2026-02-17 02:58:56.856654 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-17 02:58:56.856663 | orchestrator | Tuesday 17 February 2026 02:58:55 +0000 (0:00:00.726) 0:00:24.362 ******
2026-02-17 02:58:56.856673 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-17 02:58:56.856683 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-17 02:58:56.856693 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-17 02:58:56.856702 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-17 02:58:56.856711 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-17 02:58:56.856721 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-17 02:58:56.856730 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-17 02:58:56.856739 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-17 02:58:56.856749 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-17 02:58:56.856758 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-17 02:58:56.856767 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-17 02:58:56.856777 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-17 02:58:56.856786 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-17 02:58:56.856796 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-17 02:58:56.856805 | orchestrator |
2026-02-17 02:58:56.856825 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-17 02:59:16.188519 | orchestrator | Tuesday 17 February 2026 02:58:56 +0000 (0:00:01.314) 0:00:25.676 ******
2026-02-17 02:59:16.188722 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:59:16.188742 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:59:16.188755 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:59:16.188767 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:59:16.188779 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:59:16.188792 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:59:16.188803 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:59:16.188816 | orchestrator |
2026-02-17 02:59:16.188859 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-17 02:59:16.188871 | orchestrator | Tuesday 17 February 2026 02:58:57 +0000 (0:00:00.673) 0:00:26.349 ******
2026-02-17 02:59:16.188885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-5, testbed-node-4, testbed-node-3, testbed-node-2
2026-02-17 02:59:16.188901 | orchestrator |
2026-02-17 02:59:16.188914 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-17 02:59:16.188926 | orchestrator | Tuesday 17 February 2026 02:59:02 +0000 (0:00:04.942) 0:00:31.291 ******
2026-02-17 02:59:16.188943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.188956 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.188970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.188984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.188997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189029 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189042 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189196 | orchestrator |
2026-02-17 02:59:16.189211 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-17 02:59:16.189226 | orchestrator | Tuesday 17 February 2026 02:59:08 +0000 (0:00:06.516) 0:00:37.808 ******
2026-02-17 02:59:16.189238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189276 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-17 02:59:16.189362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189376 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:16.189425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:23.103400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-17 02:59:23.103494 | orchestrator |
2026-02-17 02:59:23.103507 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-17 02:59:23.103515 | orchestrator | Tuesday 17 February 2026 02:59:16 +0000 (0:00:07.192) 0:00:45.000 ******
2026-02-17 02:59:23.103521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 02:59:23.103525 | orchestrator |
2026-02-17 02:59:23.103530 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-17 02:59:23.103534 | orchestrator | Tuesday 17 February 2026 02:59:17 +0000 (0:00:01.680) 0:00:46.681 ******
2026-02-17 02:59:23.103633 | orchestrator | ok: [testbed-manager]
2026-02-17 02:59:23.103639 | orchestrator | ok: [testbed-node-0]
2026-02-17 02:59:23.103643 | orchestrator | ok: [testbed-node-1]
2026-02-17 02:59:23.103647 | orchestrator | ok: [testbed-node-2]
2026-02-17 02:59:23.103650 | orchestrator | ok: [testbed-node-3]
2026-02-17 02:59:23.103654 | orchestrator | ok: [testbed-node-4]
2026-02-17 02:59:23.103658 | orchestrator | ok: [testbed-node-5]
2026-02-17 02:59:23.103662 | orchestrator |
2026-02-17 02:59:23.103666 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-17 02:59:23.103671 | orchestrator | Tuesday 17 February 2026 02:59:18 +0000 (0:00:01.031) 0:00:47.712 ******
2026-02-17 02:59:23.103675 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-17 02:59:23.103679 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-17 02:59:23.103683 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-17 02:59:23.103687 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-17 02:59:23.103691 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-17 02:59:23.103695 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-17 02:59:23.103698 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-17 02:59:23.103702 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-17 02:59:23.103706 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:59:23.103711 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-17 02:59:23.103727 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-17 02:59:23.103731 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-17 02:59:23.103735 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-17 02:59:23.103739 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:59:23.103759 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-17 02:59:23.103763 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-17 02:59:23.103767 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-17 02:59:23.103770 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-17 02:59:23.103774 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:59:23.103778 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-17 02:59:23.103782 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-17 02:59:23.103786 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-17 02:59:23.103789 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-17 02:59:23.103793 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:59:23.103797 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-17 02:59:23.103801 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-17 02:59:23.103804 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-17 02:59:23.103808 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-17 02:59:23.103812 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:59:23.103816 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:59:23.103819 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-17 02:59:23.103823 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-17 02:59:23.103827 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-17 02:59:23.103831 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-17 02:59:23.103834 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:59:23.103838 | orchestrator |
2026-02-17 02:59:23.103842 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-17 02:59:23.103856 | orchestrator | Tuesday 17 February 2026 02:59:21 +0000 (0:00:02.291) 0:00:50.004 ******
2026-02-17 02:59:23.103861 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:59:23.103864 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:59:23.103868 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:59:23.103872 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:59:23.103875 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:59:23.103879 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:59:23.103883 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:59:23.103886 | orchestrator |
2026-02-17 02:59:23.103890 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-17 02:59:23.103894 | orchestrator | Tuesday 17 February 2026 02:59:21 +0000 (0:00:00.663) 0:00:50.668 ******
2026-02-17 02:59:23.103897 | orchestrator | skipping: [testbed-manager]
2026-02-17 02:59:23.103901 | orchestrator | skipping: [testbed-node-0]
2026-02-17 02:59:23.103905 | orchestrator | skipping: [testbed-node-1]
2026-02-17 02:59:23.103909 | orchestrator | skipping: [testbed-node-2]
2026-02-17 02:59:23.103913 | orchestrator | skipping: [testbed-node-3]
2026-02-17 02:59:23.103916 | orchestrator | skipping: [testbed-node-4]
2026-02-17 02:59:23.103920 | orchestrator | skipping: [testbed-node-5]
2026-02-17 02:59:23.103924 | orchestrator |
2026-02-17 02:59:23.103927 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 02:59:23.103932 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-17 02:59:23.103938 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-17 02:59:23.103946 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-17 02:59:23.103950 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-17 02:59:23.103953 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-17 02:59:23.103957 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-17 02:59:23.103962 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-17 02:59:23.103966 | orchestrator |
2026-02-17 02:59:23.103970 | orchestrator |
2026-02-17 02:59:23.103974 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 02:59:23.103979 | orchestrator | Tuesday 17 February 2026 02:59:22 +0000 (0:00:00.771) 0:00:51.440 ******
2026-02-17 02:59:23.103986 | orchestrator | ===============================================================================
2026-02-17 02:59:23.103991 | orchestrator | osism.commons.network : Create systemd networkd network
files ----------- 7.19s 2026-02-17 02:59:23.103995 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.52s 2026-02-17 02:59:23.103999 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.94s 2026-02-17 02:59:23.104003 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.69s 2026-02-17 02:59:23.104007 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.31s 2026-02-17 02:59:23.104012 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.29s 2026-02-17 02:59:23.104016 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.85s 2026-02-17 02:59:23.104020 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.80s 2026-02-17 02:59:23.104025 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.71s 2026-02-17 02:59:23.104029 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.71s 2026-02-17 02:59:23.104033 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.68s 2026-02-17 02:59:23.104037 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.59s 2026-02-17 02:59:23.104041 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.40s 2026-02-17 02:59:23.104045 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.31s 2026-02-17 02:59:23.104049 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.28s 2026-02-17 02:59:23.104054 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2026-02-17 02:59:23.104058 | orchestrator | osism.commons.network : List existing configuration files 
--------------- 1.17s 2026-02-17 02:59:23.104062 | orchestrator | osism.commons.network : Create required directories --------------------- 1.06s 2026-02-17 02:59:23.104066 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s 2026-02-17 02:59:23.104070 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s 2026-02-17 02:59:23.487284 | orchestrator | + osism apply wireguard 2026-02-17 02:59:35.829986 | orchestrator | 2026-02-17 02:59:35 | INFO  | Task 81d2b89e-1364-447e-9729-b60030172567 (wireguard) was prepared for execution. 2026-02-17 02:59:35.830139 | orchestrator | 2026-02-17 02:59:35 | INFO  | It takes a moment until task 81d2b89e-1364-447e-9729-b60030172567 (wireguard) has been started and output is visible here. 2026-02-17 02:59:57.630783 | orchestrator | 2026-02-17 02:59:57.630933 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-17 02:59:57.630952 | orchestrator | 2026-02-17 02:59:57.630964 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-17 02:59:57.630976 | orchestrator | Tuesday 17 February 2026 02:59:40 +0000 (0:00:00.250) 0:00:00.250 ****** 2026-02-17 02:59:57.630988 | orchestrator | ok: [testbed-manager] 2026-02-17 02:59:57.631000 | orchestrator | 2026-02-17 02:59:57.631012 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-17 02:59:57.631022 | orchestrator | Tuesday 17 February 2026 02:59:42 +0000 (0:00:01.677) 0:00:01.927 ****** 2026-02-17 02:59:57.631033 | orchestrator | changed: [testbed-manager] 2026-02-17 02:59:57.631049 | orchestrator | 2026-02-17 02:59:57.631069 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-17 02:59:57.631088 | orchestrator | Tuesday 17 February 2026 02:59:49 +0000 (0:00:07.296) 0:00:09.224 ****** 2026-02-17 
02:59:57.631106 | orchestrator | changed: [testbed-manager] 2026-02-17 02:59:57.631125 | orchestrator | 2026-02-17 02:59:57.631144 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-17 02:59:57.631164 | orchestrator | Tuesday 17 February 2026 02:59:50 +0000 (0:00:00.606) 0:00:09.831 ****** 2026-02-17 02:59:57.631178 | orchestrator | changed: [testbed-manager] 2026-02-17 02:59:57.631189 | orchestrator | 2026-02-17 02:59:57.631199 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-17 02:59:57.631210 | orchestrator | Tuesday 17 February 2026 02:59:50 +0000 (0:00:00.498) 0:00:10.329 ****** 2026-02-17 02:59:57.631221 | orchestrator | ok: [testbed-manager] 2026-02-17 02:59:57.631231 | orchestrator | 2026-02-17 02:59:57.631242 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-17 02:59:57.631253 | orchestrator | Tuesday 17 February 2026 02:59:51 +0000 (0:00:00.749) 0:00:11.079 ****** 2026-02-17 02:59:57.631263 | orchestrator | ok: [testbed-manager] 2026-02-17 02:59:57.631274 | orchestrator | 2026-02-17 02:59:57.631284 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-17 02:59:57.631295 | orchestrator | Tuesday 17 February 2026 02:59:51 +0000 (0:00:00.442) 0:00:11.521 ****** 2026-02-17 02:59:57.631305 | orchestrator | ok: [testbed-manager] 2026-02-17 02:59:57.631318 | orchestrator | 2026-02-17 02:59:57.631334 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-17 02:59:57.631355 | orchestrator | Tuesday 17 February 2026 02:59:52 +0000 (0:00:00.455) 0:00:11.976 ****** 2026-02-17 02:59:57.631374 | orchestrator | changed: [testbed-manager] 2026-02-17 02:59:57.631389 | orchestrator | 2026-02-17 02:59:57.631408 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 
2026-02-17 02:59:57.631421 | orchestrator | Tuesday 17 February 2026 02:59:53 +0000 (0:00:01.294) 0:00:13.271 ******
2026-02-17 02:59:57.631439 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-17 02:59:57.631459 | orchestrator | changed: [testbed-manager]
2026-02-17 02:59:57.631478 | orchestrator |
2026-02-17 02:59:57.631499 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-17 02:59:57.631519 | orchestrator | Tuesday 17 February 2026 02:59:54 +0000 (0:00:01.007) 0:00:14.278 ******
2026-02-17 02:59:57.631537 | orchestrator | changed: [testbed-manager]
2026-02-17 02:59:57.631549 | orchestrator |
2026-02-17 02:59:57.631559 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-17 02:59:57.631601 | orchestrator | Tuesday 17 February 2026 02:59:56 +0000 (0:00:01.732) 0:00:16.011 ******
2026-02-17 02:59:57.631612 | orchestrator | changed: [testbed-manager]
2026-02-17 02:59:57.631623 | orchestrator |
2026-02-17 02:59:57.631634 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 02:59:57.631644 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 02:59:57.631656 | orchestrator |
2026-02-17 02:59:57.631667 | orchestrator |
2026-02-17 02:59:57.631678 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 02:59:57.631700 | orchestrator | Tuesday 17 February 2026 02:59:57 +0000 (0:00:01.002) 0:00:17.013 ******
2026-02-17 02:59:57.631711 | orchestrator | ===============================================================================
2026-02-17 02:59:57.631721 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.30s
2026-02-17 02:59:57.631732 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s
2026-02-17 02:59:57.631743 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s
2026-02-17 02:59:57.631753 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.29s
2026-02-17 02:59:57.631764 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s
2026-02-17 02:59:57.631774 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.00s
2026-02-17 02:59:57.631785 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.75s
2026-02-17 02:59:57.631802 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.61s
2026-02-17 02:59:57.631820 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.50s
2026-02-17 02:59:57.631839 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s
2026-02-17 02:59:57.631858 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s
2026-02-17 02:59:57.999182 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-17 02:59:58.040181 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-17 02:59:58.040281 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-17 02:59:58.115577 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 198 0 --:--:-- --:--:-- --:--:-- 200
2026-02-17 02:59:58.130962 | orchestrator | + osism apply --environment custom workarounds
2026-02-17 03:00:00.206999 | orchestrator | 2026-02-17 03:00:00 | INFO  | Trying to run play workarounds in environment custom
2026-02-17 03:00:10.356527 | orchestrator | 2026-02-17 03:00:10 | INFO  | Task a5c70c47-2856-4f76-96af-17c2cb873537 (workarounds) was prepared for execution.
2026-02-17 03:00:10.356691 | orchestrator | 2026-02-17 03:00:10 | INFO  | It takes a moment until task a5c70c47-2856-4f76-96af-17c2cb873537 (workarounds) has been started and output is visible here.
2026-02-17 03:00:37.670452 | orchestrator |
2026-02-17 03:00:37.670534 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 03:00:37.670541 | orchestrator |
2026-02-17 03:00:37.670546 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-17 03:00:37.670551 | orchestrator | Tuesday 17 February 2026 03:00:14 +0000 (0:00:00.130) 0:00:00.130 ******
2026-02-17 03:00:37.670556 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-17 03:00:37.670560 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-17 03:00:37.670564 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-17 03:00:37.670568 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-17 03:00:37.670572 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-17 03:00:37.670576 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-17 03:00:37.670580 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-17 03:00:37.670584 | orchestrator |
2026-02-17 03:00:37.670587 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-17 03:00:37.670609 | orchestrator |
2026-02-17 03:00:37.670615 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-17 03:00:37.670621 | orchestrator | Tuesday 17 February 2026 03:00:15 +0000 (0:00:00.900) 0:00:01.031 ******
2026-02-17 03:00:37.670627 | orchestrator | ok: [testbed-manager]
2026-02-17 03:00:37.670651 | orchestrator |
2026-02-17 03:00:37.670657 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-17 03:00:37.670663 | orchestrator |
2026-02-17 03:00:37.670669 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-17 03:00:37.670674 | orchestrator | Tuesday 17 February 2026 03:00:18 +0000 (0:00:02.750) 0:00:03.781 ******
2026-02-17 03:00:37.670680 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:00:37.670686 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:00:37.670691 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:00:37.670697 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:00:37.670703 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:00:37.670709 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:00:37.670715 | orchestrator |
2026-02-17 03:00:37.670721 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-17 03:00:37.670727 | orchestrator |
2026-02-17 03:00:37.670732 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-17 03:00:37.670751 | orchestrator | Tuesday 17 February 2026 03:00:20 +0000 (0:00:01.885) 0:00:05.667 ******
2026-02-17 03:00:37.670758 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-17 03:00:37.670765 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-17 03:00:37.670772 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-17 03:00:37.670778 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-17 03:00:37.670784 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-17 03:00:37.670790 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-17 03:00:37.670796 | orchestrator |
2026-02-17 03:00:37.670803 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-17 03:00:37.670808 | orchestrator | Tuesday 17 February 2026 03:00:22 +0000 (0:00:01.662) 0:00:07.330 ******
2026-02-17 03:00:37.670812 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:00:37.670816 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:00:37.670820 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:00:37.670824 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:00:37.670827 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:00:37.670831 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:00:37.670835 | orchestrator |
2026-02-17 03:00:37.670838 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-17 03:00:37.670842 | orchestrator | Tuesday 17 February 2026 03:00:25 +0000 (0:00:03.841) 0:00:11.172 ******
2026-02-17 03:00:37.670846 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:00:37.670850 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:00:37.670854 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:00:37.670857 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:00:37.670861 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:00:37.670865 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:00:37.670868 | orchestrator |
2026-02-17 03:00:37.670872 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-17 03:00:37.670876 | orchestrator |
2026-02-17 03:00:37.670880 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-17 03:00:37.670883 | orchestrator | Tuesday 17 February 2026 03:00:26 +0000 (0:00:00.787) 0:00:11.959 ******
2026-02-17 03:00:37.670887 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:00:37.670891 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:00:37.670894 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:00:37.670898 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:00:37.670902 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:00:37.670905 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:00:37.670914 | orchestrator | changed: [testbed-manager]
2026-02-17 03:00:37.670917 | orchestrator |
2026-02-17 03:00:37.670921 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-17 03:00:37.670925 | orchestrator | Tuesday 17 February 2026 03:00:28 +0000 (0:00:01.751) 0:00:13.711 ******
2026-02-17 03:00:37.670929 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:00:37.670932 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:00:37.670936 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:00:37.670940 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:00:37.670943 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:00:37.670947 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:00:37.670962 | orchestrator | changed: [testbed-manager]
2026-02-17 03:00:37.670966 | orchestrator |
2026-02-17 03:00:37.670970 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-17 03:00:37.670973 | orchestrator | Tuesday 17 February 2026 03:00:30 +0000 (0:00:01.734) 0:00:15.449 ******
2026-02-17 03:00:37.670977 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:00:37.670981 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:00:37.670984 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:00:37.670988 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:00:37.670992 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:00:37.670995 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:00:37.670999 | orchestrator | ok: [testbed-manager]
2026-02-17 03:00:37.671003 | orchestrator |
2026-02-17 03:00:37.671007 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-17 03:00:37.671010 | orchestrator | Tuesday 17 February 2026 03:00:31 +0000 (0:00:01.734) 0:00:17.183 ******
2026-02-17 03:00:37.671014 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:00:37.671019 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:00:37.671023 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:00:37.671027 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:00:37.671032 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:00:37.671036 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:00:37.671041 | orchestrator | changed: [testbed-manager]
2026-02-17 03:00:37.671045 | orchestrator |
2026-02-17 03:00:37.671049 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-17 03:00:37.671054 | orchestrator | Tuesday 17 February 2026 03:00:33 +0000 (0:00:01.961) 0:00:19.145 ******
2026-02-17 03:00:37.671059 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:00:37.671063 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:00:37.671067 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:00:37.671071 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:00:37.671076 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:00:37.671080 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:00:37.671084 | orchestrator | skipping: [testbed-manager]
2026-02-17 03:00:37.671089 | orchestrator |
2026-02-17 03:00:37.671094 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-17 03:00:37.671098 | orchestrator |
2026-02-17 03:00:37.671102 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-17 03:00:37.671107 | orchestrator | Tuesday 17 February 2026 03:00:34 +0000 (0:00:00.697) 0:00:19.842 ******
2026-02-17 03:00:37.671111 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:00:37.671115 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:00:37.671120 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:00:37.671124 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:00:37.671128 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:00:37.671136 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:00:37.671141 | orchestrator | ok: [testbed-manager]
2026-02-17 03:00:37.671145 | orchestrator |
2026-02-17 03:00:37.671149 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:00:37.671155 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-17 03:00:37.671161 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 03:00:37.671168 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 03:00:37.671173 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 03:00:37.671177 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 03:00:37.671181 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 03:00:37.671186 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 03:00:37.671190 | orchestrator |
2026-02-17 03:00:37.671194 | orchestrator |
2026-02-17 03:00:37.671199 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:00:37.671203 | orchestrator | Tuesday 17 February 2026 03:00:37 +0000 (0:00:03.040) 0:00:22.882 ******
2026-02-17 03:00:37.671208 | orchestrator | ===============================================================================
2026-02-17 03:00:37.671212 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.84s
2026-02-17 03:00:37.671216 | orchestrator | Install python3-docker -------------------------------------------------- 3.04s
2026-02-17 03:00:37.671221 | orchestrator | Apply netplan configuration --------------------------------------------- 2.75s
2026-02-17 03:00:37.671225 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.96s
2026-02-17 03:00:37.671230 | orchestrator | Apply netplan configuration --------------------------------------------- 1.89s
2026-02-17 03:00:37.671234 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.75s
2026-02-17 03:00:37.671238 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.74s
2026-02-17 03:00:37.671243 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.73s
2026-02-17 03:00:37.671247 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.66s
2026-02-17 03:00:37.671252 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.90s
2026-02-17 03:00:37.671258 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.79s
2026-02-17 03:00:37.671268 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.70s
2026-02-17 03:00:38.506777 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-17 03:00:50.669320 | orchestrator | 2026-02-17 03:00:50 | INFO  | Task 113dd8d0-1701-4cea-9929-2f7c9da19811 (reboot) was prepared for execution.
2026-02-17 03:00:50.669438 | orchestrator | 2026-02-17 03:00:50 | INFO  | It takes a moment until task 113dd8d0-1701-4cea-9929-2f7c9da19811 (reboot) has been started and output is visible here.
2026-02-17 03:01:01.557905 | orchestrator |
2026-02-17 03:01:01.558080 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-17 03:01:01.558101 | orchestrator |
2026-02-17 03:01:01.558113 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-17 03:01:01.558125 | orchestrator | Tuesday 17 February 2026 03:00:55 +0000 (0:00:00.225) 0:00:00.225 ******
2026-02-17 03:01:01.558136 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:01:01.558160 | orchestrator |
2026-02-17 03:01:01.558171 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-17 03:01:01.558182 | orchestrator | Tuesday 17 February 2026 03:00:55 +0000 (0:00:00.111) 0:00:00.337 ******
2026-02-17 03:01:01.558193 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:01:01.558203 | orchestrator |
2026-02-17 03:01:01.558215 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-17 03:01:01.558252 | orchestrator | Tuesday 17 February 2026 03:00:56 +0000 (0:00:00.962) 0:00:01.300 ******
2026-02-17 03:01:01.558264 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:01:01.558275 | orchestrator |
2026-02-17 03:01:01.558286 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-17 03:01:01.558296 | orchestrator |
2026-02-17 03:01:01.558307 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-17 03:01:01.558318 | orchestrator | Tuesday 17 February 2026 03:00:56 +0000 (0:00:00.139) 0:00:01.440 ******
2026-02-17 03:01:01.558329 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:01:01.558339 | orchestrator |
2026-02-17 03:01:01.558350 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-17 03:01:01.558361 | orchestrator | Tuesday 17 February 2026 03:00:56 +0000 (0:00:00.103) 0:00:01.543 ******
2026-02-17 03:01:01.558371 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:01:01.558382 | orchestrator |
2026-02-17 03:01:01.558393 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-17 03:01:01.558419 | orchestrator | Tuesday 17 February 2026 03:00:57 +0000 (0:00:00.680) 0:00:02.224 ******
2026-02-17 03:01:01.558433 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:01:01.558447 | orchestrator |
2026-02-17 03:01:01.558467 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-17 03:01:01.558486 | orchestrator |
2026-02-17 03:01:01.558504 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-17 03:01:01.558524 | orchestrator | Tuesday 17 February 2026 03:00:57 +0000 (0:00:00.121) 0:00:02.345 ******
2026-02-17 03:01:01.558544 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:01:01.558565 | orchestrator |
2026-02-17 03:01:01.558584 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-17 03:01:01.558603 | orchestrator | Tuesday 17 February 2026 03:00:57 +0000 (0:00:00.242) 0:00:02.588 ******
2026-02-17 03:01:01.558647 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:01:01.558660 | orchestrator |
2026-02-17 03:01:01.558673 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-17 03:01:01.558686 | orchestrator | Tuesday 17 February 2026 03:00:58 +0000 (0:00:00.652) 0:00:03.240 ******
2026-02-17 03:01:01.558698 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:01:01.558710 | orchestrator |
2026-02-17 03:01:01.558723 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-17 03:01:01.558735 | orchestrator |
2026-02-17 03:01:01.558748 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-17 03:01:01.558760 | orchestrator | Tuesday 17 February 2026 03:00:58 +0000 (0:00:00.121) 0:00:03.362 ******
2026-02-17 03:01:01.558773 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:01:01.558785 | orchestrator |
2026-02-17 03:01:01.558798 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-17 03:01:01.558808 | orchestrator | Tuesday 17 February 2026 03:00:58 +0000 (0:00:00.117) 0:00:03.479 ******
2026-02-17 03:01:01.558820 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:01:01.558830 | orchestrator |
2026-02-17 03:01:01.558841 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-17 03:01:01.558852 | orchestrator | Tuesday 17 February 2026 03:00:59 +0000 (0:00:00.678) 0:00:04.158 ******
2026-02-17 03:01:01.558863 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:01:01.558874 | orchestrator |
2026-02-17 03:01:01.558884 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-17 03:01:01.558895 | orchestrator |
2026-02-17 03:01:01.558906 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-17 03:01:01.558917 | orchestrator | Tuesday 17 February 2026 03:00:59 +0000 (0:00:00.132) 0:00:04.290 ******
2026-02-17 03:01:01.558927 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:01:01.558938 | orchestrator |
2026-02-17 03:01:01.558949 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-17 03:01:01.558972 | orchestrator | Tuesday 17 February 2026 03:00:59 +0000 (0:00:00.118) 0:00:04.409 ******
2026-02-17 03:01:01.558983 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:01:01.558994 | orchestrator |
2026-02-17 03:01:01.559005 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-17 03:01:01.559016 | orchestrator | Tuesday 17 February 2026 03:01:00 +0000 (0:00:00.673) 0:00:05.083 ******
2026-02-17 03:01:01.559026 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:01:01.559037 | orchestrator |
2026-02-17 03:01:01.559048 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-17 03:01:01.559059 | orchestrator |
2026-02-17 03:01:01.559070 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-17 03:01:01.559081 | orchestrator | Tuesday 17 February 2026 03:01:00 +0000 (0:00:00.133) 0:00:05.217 ******
2026-02-17 03:01:01.559105 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:01:01.559117 | orchestrator |
2026-02-17 03:01:01.559139 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-17 03:01:01.559151 | orchestrator | Tuesday 17 February 2026 03:01:00 +0000 (0:00:00.118) 0:00:05.335 ******
2026-02-17 03:01:01.559161 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:01:01.559172 | orchestrator |
2026-02-17 03:01:01.559183 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-17 03:01:01.559194 | orchestrator | Tuesday 17 February 2026 03:01:01 +0000 (0:00:00.666) 0:00:06.002 ******
2026-02-17 03:01:01.559227 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:01:01.559239 | orchestrator |
2026-02-17 03:01:01.559250 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:01:01.559262 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 03:01:01.559275 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:01:01.559285 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:01:01.559296 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:01:01.559307 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:01:01.559318 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:01:01.559328 | orchestrator | 2026-02-17 03:01:01.559339 | orchestrator | 2026-02-17 03:01:01.559350 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:01:01.559361 | orchestrator | Tuesday 17 February 2026 03:01:01 +0000 (0:00:00.043) 0:00:06.046 ****** 2026-02-17 03:01:01.559378 | orchestrator | =============================================================================== 2026-02-17 03:01:01.559389 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.32s 2026-02-17 03:01:01.559400 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.81s 2026-02-17 03:01:01.559411 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.69s 2026-02-17 03:01:01.984472 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-17 03:01:14.292677 | orchestrator | 2026-02-17 03:01:14 | INFO  | Task 302497cd-1ffa-4f39-9bdf-aaa15251adf5 (wait-for-connection) was prepared for execution. 2026-02-17 03:01:14.292789 | orchestrator | 2026-02-17 03:01:14 | INFO  | It takes a moment until task 302497cd-1ffa-4f39-9bdf-aaa15251adf5 (wait-for-connection) has been started and output is visible here. 
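The reboot plays above only proceed because `-e ireallymeanit=yes` is passed on the `osism apply` command line; otherwise the "Exit playbook, if user did not mean to reboot systems" task aborts the run. A minimal shell sketch of the same confirmation-guard pattern — the variable name `ireallymeanit` comes from the log, the function itself is illustrative:

```shell
#!/usr/bin/env bash
# Illustrative confirmation guard, modeled on the "Exit playbook, if user
# did not mean to reboot systems" task seen in the log above. Callers must
# opt in explicitly, mirroring `-e ireallymeanit=yes`.
confirm_reboot() {
    local answer="${1:-abort}"
    if [[ "$answer" != "yes" ]]; then
        echo "aborting: pass ireallymeanit=yes to really reboot" >&2
        return 1
    fi
    echo "proceeding with reboot"
}
```

With the guard in place, an unattended pipeline such as this one simply bakes the confirmation into the command line, while an interactive operator gets a chance to bail out.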
2026-02-17 03:01:30.886514 | orchestrator | 2026-02-17 03:01:30.886612 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-17 03:01:30.886625 | orchestrator | 2026-02-17 03:01:30.886705 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-17 03:01:30.886713 | orchestrator | Tuesday 17 February 2026 03:01:18 +0000 (0:00:00.251) 0:00:00.251 ****** 2026-02-17 03:01:30.886721 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:01:30.886729 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:01:30.886737 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:01:30.886744 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:01:30.886751 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:01:30.886759 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:01:30.886766 | orchestrator | 2026-02-17 03:01:30.886773 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:01:30.886781 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:01:30.886799 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:01:30.886806 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:01:30.886814 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:01:30.886821 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:01:30.886828 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:01:30.886836 | orchestrator | 2026-02-17 03:01:30.886843 | orchestrator | 2026-02-17 03:01:30.886851 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-17 03:01:30.886858 | orchestrator | Tuesday 17 February 2026 03:01:30 +0000 (0:00:11.524) 0:00:11.776 ****** 2026-02-17 03:01:30.886865 | orchestrator | =============================================================================== 2026-02-17 03:01:30.886873 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2026-02-17 03:01:31.295752 | orchestrator | + osism apply hddtemp 2026-02-17 03:01:43.504782 | orchestrator | 2026-02-17 03:01:43 | INFO  | Task 6a974721-6884-42ff-acc5-c44fa573cc3f (hddtemp) was prepared for execution. 2026-02-17 03:01:43.504885 | orchestrator | 2026-02-17 03:01:43 | INFO  | It takes a moment until task 6a974721-6884-42ff-acc5-c44fa573cc3f (hddtemp) has been started and output is visible here. 2026-02-17 03:02:11.594326 | orchestrator | 2026-02-17 03:02:11.594427 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-17 03:02:11.594439 | orchestrator | 2026-02-17 03:02:11.594448 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-17 03:02:11.594457 | orchestrator | Tuesday 17 February 2026 03:01:48 +0000 (0:00:00.303) 0:00:00.303 ****** 2026-02-17 03:02:11.594466 | orchestrator | ok: [testbed-manager] 2026-02-17 03:02:11.594475 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:02:11.594484 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:02:11.594492 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:02:11.594500 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:02:11.594509 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:02:11.594517 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:02:11.594525 | orchestrator | 2026-02-17 03:02:11.594534 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-17 03:02:11.594543 | orchestrator | Tuesday 17 February 2026 
03:01:49 +0000 (0:00:00.843) 0:00:01.146 ****** 2026-02-17 03:02:11.594552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:02:11.594582 | orchestrator | 2026-02-17 03:02:11.594591 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-17 03:02:11.594599 | orchestrator | Tuesday 17 February 2026 03:01:50 +0000 (0:00:01.314) 0:00:02.461 ****** 2026-02-17 03:02:11.594608 | orchestrator | ok: [testbed-manager] 2026-02-17 03:02:11.594616 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:02:11.594624 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:02:11.594632 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:02:11.594640 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:02:11.594648 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:02:11.594656 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:02:11.594689 | orchestrator | 2026-02-17 03:02:11.594696 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-17 03:02:11.594716 | orchestrator | Tuesday 17 February 2026 03:01:52 +0000 (0:00:01.758) 0:00:04.220 ****** 2026-02-17 03:02:11.594723 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:02:11.594732 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:02:11.594740 | orchestrator | changed: [testbed-manager] 2026-02-17 03:02:11.594747 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:02:11.594755 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:02:11.594762 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:02:11.594770 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:02:11.594777 | orchestrator | 2026-02-17 03:02:11.594785 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-17 03:02:11.594792 | orchestrator | Tuesday 17 February 2026 03:01:53 +0000 (0:00:01.304) 0:00:05.524 ****** 2026-02-17 03:02:11.594800 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:02:11.594807 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:02:11.594815 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:02:11.594822 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:02:11.594829 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:02:11.594836 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:02:11.594844 | orchestrator | ok: [testbed-manager] 2026-02-17 03:02:11.594851 | orchestrator | 2026-02-17 03:02:11.594858 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-17 03:02:11.594866 | orchestrator | Tuesday 17 February 2026 03:01:54 +0000 (0:00:01.355) 0:00:06.879 ****** 2026-02-17 03:02:11.594873 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:02:11.594881 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:02:11.594888 | orchestrator | changed: [testbed-manager] 2026-02-17 03:02:11.594896 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:02:11.594905 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:02:11.594913 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:02:11.594920 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:02:11.594927 | orchestrator | 2026-02-17 03:02:11.594935 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-17 03:02:11.594942 | orchestrator | Tuesday 17 February 2026 03:01:55 +0000 (0:00:01.052) 0:00:07.932 ****** 2026-02-17 03:02:11.594950 | orchestrator | changed: [testbed-manager] 2026-02-17 03:02:11.594957 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:02:11.594965 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:02:11.594974 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:02:11.594981 | orchestrator | changed: 
[testbed-node-2] 2026-02-17 03:02:11.594990 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:02:11.594998 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:02:11.595006 | orchestrator | 2026-02-17 03:02:11.595013 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-17 03:02:11.595021 | orchestrator | Tuesday 17 February 2026 03:02:07 +0000 (0:00:11.839) 0:00:19.772 ****** 2026-02-17 03:02:11.595029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:02:11.595045 | orchestrator | 2026-02-17 03:02:11.595052 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-17 03:02:11.595059 | orchestrator | Tuesday 17 February 2026 03:02:09 +0000 (0:00:01.346) 0:00:21.118 ****** 2026-02-17 03:02:11.595066 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:02:11.595073 | orchestrator | changed: [testbed-manager] 2026-02-17 03:02:11.595080 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:02:11.595088 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:02:11.595095 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:02:11.595103 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:02:11.595110 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:02:11.595117 | orchestrator | 2026-02-17 03:02:11.595125 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:02:11.595133 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:02:11.595156 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:02:11.595165 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:02:11.595172 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:02:11.595180 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:02:11.595188 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:02:11.595195 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:02:11.595202 | orchestrator | 2026-02-17 03:02:11.595208 | orchestrator | 2026-02-17 03:02:11.595216 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:02:11.595223 | orchestrator | Tuesday 17 February 2026 03:02:11 +0000 (0:00:01.944) 0:00:23.062 ****** 2026-02-17 03:02:11.595231 | orchestrator | =============================================================================== 2026-02-17 03:02:11.595238 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.84s 2026-02-17 03:02:11.595245 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.94s 2026-02-17 03:02:11.595253 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.76s 2026-02-17 03:02:11.595265 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.36s 2026-02-17 03:02:11.595272 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.35s 2026-02-17 03:02:11.595279 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.31s 2026-02-17 03:02:11.595286 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.30s 2026-02-17 03:02:11.595293 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 1.05s
2026-02-17 03:02:11.595300 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.84s
2026-02-17 03:02:12.034757 | orchestrator | ++ semver 9.5.0 7.1.1
2026-02-17 03:02:12.083703 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-17 03:02:12.083783 | orchestrator | + sudo systemctl restart manager.service
2026-02-17 03:02:30.612745 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-17 03:02:30.612841 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-17 03:02:30.612853 | orchestrator | + local max_attempts=60
2026-02-17 03:02:30.612862 | orchestrator | + local name=ceph-ansible
2026-02-17 03:02:30.612870 | orchestrator | + local attempt_num=1
2026-02-17 03:02:30.612878 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:02:30.647197 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:02:30.647281 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:02:30.647292 | orchestrator | + sleep 5
2026-02-17 03:02:35.652034 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:02:35.697045 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:02:35.697143 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:02:35.697157 | orchestrator | + sleep 5
2026-02-17 03:02:40.699794 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:02:40.753359 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:02:40.753486 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:02:40.753510 | orchestrator | + sleep 5
2026-02-17 03:02:45.756951 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:02:45.797613 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:02:45.797725 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:02:45.797739 | orchestrator | + sleep 5
2026-02-17 03:02:50.803902 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:02:50.838675 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:02:50.838774 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:02:50.838781 | orchestrator | + sleep 5
2026-02-17 03:02:55.843138 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:02:55.881106 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:02:55.881284 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:02:55.881302 | orchestrator | + sleep 5
2026-02-17 03:03:00.889019 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:03:00.927227 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:00.927308 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:03:00.927319 | orchestrator | + sleep 5
2026-02-17 03:03:05.932293 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:03:05.977667 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:05.977813 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:03:05.977828 | orchestrator | + sleep 5
2026-02-17 03:03:10.978083 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:03:11.020461 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:11.020535 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:03:11.020543 | orchestrator | + sleep 5
2026-02-17 03:03:16.026978 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:03:16.064968 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:16.065062 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:03:16.065075 | orchestrator | + sleep 5
2026-02-17 03:03:21.071338 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:03:21.113875 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:21.113976 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:03:21.113993 | orchestrator | + sleep 5
2026-02-17 03:03:26.120353 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:03:26.165270 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:26.165385 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:03:26.165412 | orchestrator | + sleep 5
2026-02-17 03:03:31.170911 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:03:31.207444 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:31.207529 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-17 03:03:31.207541 | orchestrator | + sleep 5
2026-02-17 03:03:36.213530 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-17 03:03:36.249457 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:36.249592 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-17 03:03:36.249620 | orchestrator | + local max_attempts=60
2026-02-17 03:03:36.249641 | orchestrator | + local name=kolla-ansible
2026-02-17 03:03:36.249660 | orchestrator | + local attempt_num=1
2026-02-17 03:03:36.249755 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-17 03:03:36.285380 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:36.285496 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-17 03:03:36.285656 | orchestrator | + local max_attempts=60
2026-02-17 03:03:36.285684 | orchestrator | + local name=osism-ansible
2026-02-17 03:03:36.285702 | orchestrator | + local attempt_num=1
2026-02-17 03:03:36.286219 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-17 03:03:36.328678 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-17 03:03:36.328783 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-17 03:03:36.328793 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-17 03:03:36.518240 | orchestrator | ARA in ceph-ansible already disabled.
2026-02-17 03:03:36.687711 | orchestrator | ARA in kolla-ansible already disabled.
2026-02-17 03:03:36.850543 | orchestrator | ARA in osism-ansible already disabled.
2026-02-17 03:03:37.027983 | orchestrator | ARA in osism-kubernetes already disabled.
2026-02-17 03:03:37.028227 | orchestrator | + osism apply gather-facts
2026-02-17 03:03:49.386281 | orchestrator | 2026-02-17 03:03:49 | INFO  | Task 2fbc3647-e2dd-4277-a8ab-612b7462644a (gather-facts) was prepared for execution.
2026-02-17 03:03:49.386383 | orchestrator | 2026-02-17 03:03:49 | INFO  | It takes a moment until task 2fbc3647-e2dd-4277-a8ab-612b7462644a (gather-facts) has been started and output is visible here.
2026-02-17 03:04:03.712115 | orchestrator | 2026-02-17 03:04:03.712264 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-17 03:04:03.712295 | orchestrator | 2026-02-17 03:04:03.712315 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-17 03:04:03.712331 | orchestrator | Tuesday 17 February 2026 03:03:53 +0000 (0:00:00.252) 0:00:00.252 ****** 2026-02-17 03:04:03.712343 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:04:03.712355 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:04:03.712366 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:04:03.712377 | orchestrator | ok: [testbed-manager] 2026-02-17 03:04:03.712388 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:04:03.712398 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:04:03.712409 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:04:03.712420 | orchestrator | 2026-02-17 03:04:03.712431 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-17 03:04:03.712442 | orchestrator | 2026-02-17 03:04:03.712453 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-17 03:04:03.712464 | orchestrator | Tuesday 17 February 2026 03:04:02 +0000 (0:00:08.658) 0:00:08.911 ****** 2026-02-17 03:04:03.712475 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:04:03.712487 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:04:03.712498 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:04:03.712527 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:04:03.712538 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:04:03.712549 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:04:03.712560 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:04:03.712571 | orchestrator | 2026-02-17 03:04:03.712582 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-17 03:04:03.712593 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:04:03.712605 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:04:03.712616 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:04:03.712627 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:04:03.712639 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:04:03.712650 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:04:03.712691 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:04:03.712705 | orchestrator | 2026-02-17 03:04:03.712719 | orchestrator | 2026-02-17 03:04:03.712732 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:04:03.712787 | orchestrator | Tuesday 17 February 2026 03:04:03 +0000 (0:00:00.631) 0:00:09.542 ****** 2026-02-17 03:04:03.712801 | orchestrator | =============================================================================== 2026-02-17 03:04:03.712822 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.66s 2026-02-17 03:04:03.712854 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-02-17 03:04:04.106059 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-17 03:04:04.122465 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-17 
03:04:04.141897 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-17 03:04:04.160044 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-17 03:04:04.176554 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-17 03:04:04.190693 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-17 03:04:04.209205 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-17 03:04:04.230430 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-17 03:04:04.252695 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-17 03:04:04.272207 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-17 03:04:04.293864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-17 03:04:04.313588 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-17 03:04:04.328159 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-17 03:04:04.352147 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-17 03:04:04.371962 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-17 03:04:04.385341 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-17 03:04:04.402714 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-17 03:04:04.423266 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-17 03:04:04.437398 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-17 03:04:04.452238 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-17 03:04:04.464960 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-17 03:04:04.479557 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-17 03:04:04.491158 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-17 03:04:04.510459 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-17 03:04:04.944882 | orchestrator | ok: Runtime: 0:25:35.526119 2026-02-17 03:04:05.136755 | 2026-02-17 03:04:05.136968 | TASK [Deploy services] 2026-02-17 03:04:05.910825 | orchestrator | 2026-02-17 03:04:05.911014 | orchestrator | # DEPLOY SERVICES 2026-02-17 03:04:05.911038 | orchestrator | 2026-02-17 03:04:05.911051 | orchestrator | + set -e 2026-02-17 03:04:05.911063 | orchestrator | + echo 2026-02-17 03:04:05.911076 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-17 03:04:05.911089 | orchestrator | + echo 2026-02-17 03:04:05.911129 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 03:04:05.911149 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 03:04:05.911163 | orchestrator | ++ INTERACTIVE=false 2026-02-17 
03:04:05.911174 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 03:04:05.911193 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 03:04:05.911203 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 03:04:05.911216 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 03:04:05.911226 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 03:04:05.911242 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 03:04:05.911252 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 03:04:05.911265 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 03:04:05.911275 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 03:04:05.911289 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 03:04:05.911298 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 03:04:05.911308 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 03:04:05.911319 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 03:04:05.911329 | orchestrator | ++ export ARA=false 2026-02-17 03:04:05.911339 | orchestrator | ++ ARA=false 2026-02-17 03:04:05.911348 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 03:04:05.911358 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-17 03:04:05.911368 | orchestrator | ++ export TEMPEST=false 2026-02-17 03:04:05.911377 | orchestrator | ++ TEMPEST=false 2026-02-17 03:04:05.911400 | orchestrator | ++ export IS_ZUUL=true 2026-02-17 03:04:05.911410 | orchestrator | ++ IS_ZUUL=true 2026-02-17 03:04:05.911420 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 03:04:05.911430 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 03:04:05.911440 | orchestrator | ++ export EXTERNAL_API=false 2026-02-17 03:04:05.911450 | orchestrator | ++ EXTERNAL_API=false 2026-02-17 03:04:05.911460 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-17 03:04:05.911469 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-17 03:04:05.911479 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-17 
03:04:05.911489 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-17 03:04:05.911499 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-17 03:04:05.911515 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-17 03:04:05.911525 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-17 03:04:05.922000 | orchestrator | + set -e 2026-02-17 03:04:05.922277 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 03:04:05.922293 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 03:04:05.922305 | orchestrator | ++ INTERACTIVE=false 2026-02-17 03:04:05.922316 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 03:04:05.922327 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 03:04:05.922346 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 03:04:05.922364 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 03:04:05.922382 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 03:04:05.922399 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 03:04:05.922417 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 03:04:05.922433 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 03:04:05.922450 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 03:04:05.922468 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 03:04:05.922487 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 03:04:05.922506 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 03:04:05.922522 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 03:04:05.922534 | orchestrator | ++ export ARA=false 2026-02-17 03:04:05.922554 | orchestrator | ++ ARA=false 2026-02-17 03:04:05.922572 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 03:04:05.922590 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-17 03:04:05.922607 | orchestrator | ++ export TEMPEST=false 2026-02-17 03:04:05.922644 | orchestrator | ++ TEMPEST=false 2026-02-17 03:04:05.923112 | orchestrator | 2026-02-17 03:04:05.923207 | orchestrator 
| # PULL IMAGES 2026-02-17 03:04:05.923222 | orchestrator | 2026-02-17 03:04:05.923234 | orchestrator | ++ export IS_ZUUL=true 2026-02-17 03:04:05.923246 | orchestrator | ++ IS_ZUUL=true 2026-02-17 03:04:05.923257 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 03:04:05.923269 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 03:04:05.923281 | orchestrator | ++ export EXTERNAL_API=false 2026-02-17 03:04:05.923292 | orchestrator | ++ EXTERNAL_API=false 2026-02-17 03:04:05.923303 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-17 03:04:05.923314 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-17 03:04:05.923356 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-17 03:04:05.923368 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-17 03:04:05.923413 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-17 03:04:05.923424 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-17 03:04:05.923435 | orchestrator | + echo 2026-02-17 03:04:05.923447 | orchestrator | + echo '# PULL IMAGES' 2026-02-17 03:04:05.923458 | orchestrator | + echo 2026-02-17 03:04:05.923500 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-17 03:04:05.987281 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-17 03:04:05.987393 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-17 03:04:08.066317 | orchestrator | 2026-02-17 03:04:08 | INFO  | Trying to run play pull-images in environment custom 2026-02-17 03:04:18.208361 | orchestrator | 2026-02-17 03:04:18 | INFO  | Task 22c5209e-e405-4824-97d5-608fc8ca0d23 (pull-images) was prepared for execution. 2026-02-17 03:04:18.208453 | orchestrator | 2026-02-17 03:04:18 | INFO  | Task 22c5209e-e405-4824-97d5-608fc8ca0d23 is running in background. No more output. Check ARA for logs. 
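The trace above sources `include.sh` to set `OSISM_APPLY_RETRY` and then runs `osism apply --no-wait -r 2 -e custom pull-images`. A minimal sketch of that retry pattern (the `apply_with_retry` helper is a hypothetical illustration, not the actual osism code; the real `-r` flag is handled inside `osism apply` itself):

```shell
#!/usr/bin/env bash
set -e

# Hypothetical helper: retry a command up to $OSISM_APPLY_RETRY extra times,
# mirroring the retry count the script exports before calling osism apply.
apply_with_retry() {
    local retries="${OSISM_APPLY_RETRY:-1}"
    local attempt=0
    until "$@"; do
        attempt=$((attempt + 1))
        if [ "$attempt" -gt "$retries" ]; then
            echo "giving up after $attempt attempts" >&2
            return 1
        fi
        echo "retrying ($attempt/$retries) ..." >&2
    done
}

# Usage sketch, modeled on the log:
#   apply_with_retry osism apply --no-wait -e custom pull-images
apply_with_retry true && echo "ok"   # prints: ok
```

Note that `--no-wait` queues the play and returns immediately (the log's "running in background. No more output. Check ARA for logs." lines), so a retry wrapper would only catch failures to enqueue, not failures of the play itself.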
2026-02-17 03:04:18.607332 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-17 03:04:30.896529 | orchestrator | 2026-02-17 03:04:30 | INFO  | Task 89826ea3-ca5e-4dbe-930a-c417e95d45b2 (cgit) was prepared for execution. 2026-02-17 03:04:30.896685 | orchestrator | 2026-02-17 03:04:30 | INFO  | Task 89826ea3-ca5e-4dbe-930a-c417e95d45b2 is running in background. No more output. Check ARA for logs. 2026-02-17 03:04:43.591940 | orchestrator | 2026-02-17 03:04:43 | INFO  | Task 21794716-8437-4ee8-a87b-fddec2a8bc17 (dotfiles) was prepared for execution. 2026-02-17 03:04:43.592108 | orchestrator | 2026-02-17 03:04:43 | INFO  | Task 21794716-8437-4ee8-a87b-fddec2a8bc17 is running in background. No more output. Check ARA for logs. 2026-02-17 03:04:56.722695 | orchestrator | 2026-02-17 03:04:56 | INFO  | Task 28e966a8-8798-491d-b5a4-f21975379af8 (homer) was prepared for execution. 2026-02-17 03:04:56.722909 | orchestrator | 2026-02-17 03:04:56 | INFO  | Task 28e966a8-8798-491d-b5a4-f21975379af8 is running in background. No more output. Check ARA for logs. 2026-02-17 03:05:09.401262 | orchestrator | 2026-02-17 03:05:09 | INFO  | Task 3bc125ee-b169-4e65-90da-d2c627137ffe (phpmyadmin) was prepared for execution. 2026-02-17 03:05:09.401354 | orchestrator | 2026-02-17 03:05:09 | INFO  | Task 3bc125ee-b169-4e65-90da-d2c627137ffe is running in background. No more output. Check ARA for logs. 2026-02-17 03:05:22.363672 | orchestrator | 2026-02-17 03:05:22 | INFO  | Task e105cc1b-a9da-4bee-b27f-c4ce732310e5 (sosreport) was prepared for execution. 2026-02-17 03:05:22.363783 | orchestrator | 2026-02-17 03:05:22 | INFO  | Task e105cc1b-a9da-4bee-b27f-c4ce732310e5 is running in background. No more output. Check ARA for logs. 
2026-02-17 03:05:22.742235 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-17 03:05:22.747647 | orchestrator | + set -e 2026-02-17 03:05:22.747713 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 03:05:22.747722 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 03:05:22.747728 | orchestrator | ++ INTERACTIVE=false 2026-02-17 03:05:22.747735 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 03:05:22.747740 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 03:05:22.747744 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 03:05:22.747749 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 03:05:22.747754 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 03:05:22.747759 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 03:05:22.747763 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 03:05:22.747768 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 03:05:22.747773 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 03:05:22.747778 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 03:05:22.747783 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 03:05:22.747787 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 03:05:22.747807 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 03:05:22.747815 | orchestrator | ++ export ARA=false 2026-02-17 03:05:22.747823 | orchestrator | ++ ARA=false 2026-02-17 03:05:22.747831 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 03:05:22.747861 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-17 03:05:22.747869 | orchestrator | ++ export TEMPEST=false 2026-02-17 03:05:22.747876 | orchestrator | ++ TEMPEST=false 2026-02-17 03:05:22.747882 | orchestrator | ++ export IS_ZUUL=true 2026-02-17 03:05:22.747889 | orchestrator | ++ IS_ZUUL=true 2026-02-17 03:05:22.747910 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 03:05:22.747923 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 03:05:22.747930 | orchestrator | ++ export EXTERNAL_API=false 2026-02-17 03:05:22.747937 | orchestrator | ++ EXTERNAL_API=false 2026-02-17 03:05:22.747943 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-17 03:05:22.747950 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-17 03:05:22.747958 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-17 03:05:22.747965 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-17 03:05:22.747972 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-17 03:05:22.747979 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-17 03:05:22.748864 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-17 03:05:22.842998 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-17 03:05:22.843073 | orchestrator | + osism apply frr 2026-02-17 03:05:35.323628 | orchestrator | 2026-02-17 03:05:35 | INFO  | Task def9d9b0-d794-4356-93a0-cd2305ceafa9 (frr) was prepared for execution. 2026-02-17 03:05:35.323740 | orchestrator | 2026-02-17 03:05:35 | INFO  | It takes a moment until task def9d9b0-d794-4356-93a0-cd2305ceafa9 (frr) has been started and output is visible here. 
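The `++ semver 9.5.0 8.0.3` / `+ [[ 1 -ge 0 ]]` lines above show the version gate: the `semver` helper appears to behave as a comparator printing -1, 0, or 1, and the script only runs the step when the manager version is at or above the threshold. An illustrative stand-in built on `sort -V` (not the testbed's actual `semver` implementation):

```shell
#!/usr/bin/env bash
# Assumed comparator semantics: print -1 if $1 < $2, 0 if equal, 1 if $1 > $2.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1   # $1 sorts first, so it is the older version
    else
        echo 1    # $1 is the newer version
    fi
}

# Gate a step on a minimum version, matching the pattern in the log:
if [[ $(semver_cmp 9.5.0 8.0.3) -ge 0 ]]; then
    echo "run frr deployment"   # prints: run frr deployment
fi
```

`sort -V` orders version strings numerically per component, which is what makes 9.5.0 compare greater than 8.0.3 even though "8" > "9" would fail a plain lexicographic test.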
2026-02-17 03:06:19.426926 | orchestrator | 2026-02-17 03:06:19.427003 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-17 03:06:19.427011 | orchestrator | 2026-02-17 03:06:19.427016 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-17 03:06:19.427026 | orchestrator | Tuesday 17 February 2026 03:05:45 +0000 (0:00:00.387) 0:00:00.387 ****** 2026-02-17 03:06:19.427031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-17 03:06:19.427036 | orchestrator | 2026-02-17 03:06:19.427040 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-17 03:06:19.427044 | orchestrator | Tuesday 17 February 2026 03:05:46 +0000 (0:00:00.327) 0:00:00.714 ****** 2026-02-17 03:06:19.427049 | orchestrator | changed: [testbed-manager] 2026-02-17 03:06:19.427053 | orchestrator | 2026-02-17 03:06:19.427057 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-17 03:06:19.427063 | orchestrator | Tuesday 17 February 2026 03:05:50 +0000 (0:00:04.336) 0:00:05.050 ****** 2026-02-17 03:06:19.427066 | orchestrator | changed: [testbed-manager] 2026-02-17 03:06:19.427070 | orchestrator | 2026-02-17 03:06:19.427074 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-17 03:06:19.427078 | orchestrator | Tuesday 17 February 2026 03:06:07 +0000 (0:00:16.890) 0:00:21.941 ****** 2026-02-17 03:06:19.427082 | orchestrator | ok: [testbed-manager] 2026-02-17 03:06:19.427087 | orchestrator | 2026-02-17 03:06:19.427091 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-17 03:06:19.427094 | orchestrator | Tuesday 17 February 2026 03:06:08 +0000 (0:00:01.078) 0:00:23.019 ****** 2026-02-17 
03:06:19.427098 | orchestrator | changed: [testbed-manager] 2026-02-17 03:06:19.427102 | orchestrator | 2026-02-17 03:06:19.427106 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-17 03:06:19.427110 | orchestrator | Tuesday 17 February 2026 03:06:09 +0000 (0:00:01.177) 0:00:24.197 ****** 2026-02-17 03:06:19.427114 | orchestrator | ok: [testbed-manager] 2026-02-17 03:06:19.427118 | orchestrator | 2026-02-17 03:06:19.427121 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-17 03:06:19.427126 | orchestrator | Tuesday 17 February 2026 03:06:10 +0000 (0:00:01.337) 0:00:25.534 ****** 2026-02-17 03:06:19.427130 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:06:19.427134 | orchestrator | 2026-02-17 03:06:19.427138 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-17 03:06:19.427141 | orchestrator | Tuesday 17 February 2026 03:06:11 +0000 (0:00:00.154) 0:00:25.689 ****** 2026-02-17 03:06:19.427157 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:06:19.427162 | orchestrator | 2026-02-17 03:06:19.427166 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-17 03:06:19.427169 | orchestrator | Tuesday 17 February 2026 03:06:11 +0000 (0:00:00.193) 0:00:25.883 ****** 2026-02-17 03:06:19.427173 | orchestrator | changed: [testbed-manager] 2026-02-17 03:06:19.427177 | orchestrator | 2026-02-17 03:06:19.427181 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-17 03:06:19.427185 | orchestrator | Tuesday 17 February 2026 03:06:12 +0000 (0:00:01.092) 0:00:26.975 ****** 2026-02-17 03:06:19.427189 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-17 03:06:19.427192 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-17 03:06:19.427198 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-17 03:06:19.427202 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-17 03:06:19.427205 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-17 03:06:19.427209 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-17 03:06:19.427213 | orchestrator | 2026-02-17 03:06:19.427217 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-17 03:06:19.427221 | orchestrator | Tuesday 17 February 2026 03:06:15 +0000 (0:00:02.847) 0:00:29.823 ****** 2026-02-17 03:06:19.427225 | orchestrator | ok: [testbed-manager] 2026-02-17 03:06:19.427229 | orchestrator | 2026-02-17 03:06:19.427232 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-17 03:06:19.427236 | orchestrator | Tuesday 17 February 2026 03:06:17 +0000 (0:00:02.016) 0:00:31.839 ****** 2026-02-17 03:06:19.427240 | orchestrator | changed: [testbed-manager] 2026-02-17 03:06:19.427244 | orchestrator | 2026-02-17 03:06:19.427248 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:06:19.427252 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:06:19.427256 | orchestrator | 2026-02-17 03:06:19.427259 | orchestrator | 2026-02-17 03:06:19.427266 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:06:19.427270 | orchestrator | Tuesday 17 February 2026 03:06:18 +0000 (0:00:01.635) 0:00:33.474 ****** 2026-02-17 03:06:19.427274 | 
orchestrator | =============================================================================== 2026-02-17 03:06:19.427278 | orchestrator | osism.services.frr : Install frr package ------------------------------- 16.89s 2026-02-17 03:06:19.427282 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 4.34s 2026-02-17 03:06:19.427285 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.85s 2026-02-17 03:06:19.427289 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.02s 2026-02-17 03:06:19.427293 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.64s 2026-02-17 03:06:19.427306 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.34s 2026-02-17 03:06:19.427310 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.18s 2026-02-17 03:06:19.427314 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.09s 2026-02-17 03:06:19.427318 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.08s 2026-02-17 03:06:19.427322 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.33s 2026-02-17 03:06:19.427326 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.19s 2026-02-17 03:06:19.427330 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-02-17 03:06:19.829604 | orchestrator | + osism apply kubernetes 2026-02-17 03:06:22.137141 | orchestrator | 2026-02-17 03:06:22 | INFO  | Task 125caa3b-22b7-451e-a819-84a804ca9fec (kubernetes) was prepared for execution. 
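The `osism.services.frr : Set sysctl parameters` task above enables IPv4 forwarding and multipath routing on the manager before FRR starts. A manual sketch of the same settings (values copied from the task output; the drop-in file name `90-frr.conf` is a hypothetical choice, not taken from the role):

```shell
#!/usr/bin/env bash
set -e

# Write the kernel parameters the frr role applied, as a sysctl drop-in.
cat > 90-frr.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF

# On a real host this file would go to /etc/sysctl.d/ and be loaded with:
#   sudo sysctl --system
echo "wrote $(grep -c '=' 90-frr.conf) settings"   # prints: wrote 6 settings
```

`fib_multipath_hash_policy=1` (L4 hashing) and `ignore_routes_with_linkdown=1` matter for the BGP/ECMP uplink setup that the `_frr_uplinks` fact configures; `rp_filter=2` is the loose reverse-path mode needed when traffic can legitimately return over a different uplink.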
2026-02-17 03:06:22.137270 | orchestrator | 2026-02-17 03:06:22 | INFO  | It takes a moment until task 125caa3b-22b7-451e-a819-84a804ca9fec (kubernetes) has been started and output is visible here. 2026-02-17 03:06:49.794418 | orchestrator | 2026-02-17 03:06:49.794526 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-17 03:06:49.794542 | orchestrator | 2026-02-17 03:06:49.794554 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-17 03:06:49.794567 | orchestrator | Tuesday 17 February 2026 03:06:27 +0000 (0:00:00.225) 0:00:00.225 ****** 2026-02-17 03:06:49.794584 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:06:49.794601 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:06:49.794617 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:06:49.794634 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:06:49.794650 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:06:49.794666 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:06:49.794684 | orchestrator | 2026-02-17 03:06:49.794701 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-17 03:06:49.794717 | orchestrator | Tuesday 17 February 2026 03:06:28 +0000 (0:00:01.227) 0:00:01.453 ****** 2026-02-17 03:06:49.794731 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.794741 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.794751 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.794761 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.794771 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.794837 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.794852 | orchestrator | 2026-02-17 03:06:49.794866 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-17 03:06:49.794894 | orchestrator | Tuesday 17 February 2026 
03:06:29 +0000 (0:00:00.788) 0:00:02.241 ****** 2026-02-17 03:06:49.794914 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.794932 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.795090 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.795108 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.795124 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.795140 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.795157 | orchestrator | 2026-02-17 03:06:49.795172 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-17 03:06:49.795186 | orchestrator | Tuesday 17 February 2026 03:06:30 +0000 (0:00:00.950) 0:00:03.192 ****** 2026-02-17 03:06:49.795200 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:06:49.795216 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:06:49.795233 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:06:49.795251 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:06:49.795266 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:06:49.795279 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:06:49.795295 | orchestrator | 2026-02-17 03:06:49.795310 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-17 03:06:49.795326 | orchestrator | Tuesday 17 February 2026 03:06:32 +0000 (0:00:02.004) 0:00:05.197 ****** 2026-02-17 03:06:49.795341 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:06:49.795355 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:06:49.795369 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:06:49.795384 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:06:49.795399 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:06:49.795414 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:06:49.795429 | orchestrator | 2026-02-17 03:06:49.795445 | orchestrator | TASK [k3s_prereq : Enable 
IPv6 router advertisements] ************************** 2026-02-17 03:06:49.795460 | orchestrator | Tuesday 17 February 2026 03:06:34 +0000 (0:00:01.487) 0:00:06.684 ****** 2026-02-17 03:06:49.795475 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:06:49.795525 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:06:49.795542 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:06:49.795558 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:06:49.795574 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:06:49.795589 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:06:49.795605 | orchestrator | 2026-02-17 03:06:49.795631 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-17 03:06:49.795642 | orchestrator | Tuesday 17 February 2026 03:06:35 +0000 (0:00:01.200) 0:00:07.885 ****** 2026-02-17 03:06:49.795652 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.795662 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.795671 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.795681 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.795690 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.795700 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.795709 | orchestrator | 2026-02-17 03:06:49.795720 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-17 03:06:49.795737 | orchestrator | Tuesday 17 February 2026 03:06:35 +0000 (0:00:00.620) 0:00:08.505 ****** 2026-02-17 03:06:49.795751 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.795997 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.796027 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.796044 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.796061 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.796078 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 03:06:49.796093 | orchestrator | 2026-02-17 03:06:49.796109 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-17 03:06:49.796125 | orchestrator | Tuesday 17 February 2026 03:06:36 +0000 (0:00:00.816) 0:00:09.322 ****** 2026-02-17 03:06:49.796140 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 03:06:49.796156 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 03:06:49.796170 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.796185 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 03:06:49.796199 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 03:06:49.796216 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.796232 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 03:06:49.796248 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 03:06:49.796265 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.796282 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 03:06:49.796331 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 03:06:49.796342 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.796352 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 03:06:49.796362 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 03:06:49.796372 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.796382 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 03:06:49.796391 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 03:06:49.796401 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.796411 | orchestrator | 2026-02-17 03:06:49.796420 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-17 03:06:49.796430 | orchestrator | Tuesday 17 February 2026 03:06:37 +0000 (0:00:00.682) 0:00:10.005 ****** 2026-02-17 03:06:49.796440 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.796449 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.796459 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.796487 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.796503 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.796519 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.796535 | orchestrator | 2026-02-17 03:06:49.796549 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-17 03:06:49.796566 | orchestrator | Tuesday 17 February 2026 03:06:38 +0000 (0:00:01.432) 0:00:11.437 ****** 2026-02-17 03:06:49.796729 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:06:49.796758 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:06:49.796805 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:06:49.796825 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:06:49.796840 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:06:49.796855 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:06:49.796871 | orchestrator | 2026-02-17 03:06:49.796888 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-17 03:06:49.796903 | orchestrator | Tuesday 17 February 2026 03:06:39 +0000 (0:00:00.886) 0:00:12.324 ****** 2026-02-17 03:06:49.796918 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:06:49.796933 | orchestrator | changed: [testbed-node-2] 
2026-02-17 03:06:49.796948 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:06:49.796962 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:06:49.796976 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:06:49.796990 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:06:49.797006 | orchestrator | 2026-02-17 03:06:49.797021 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-17 03:06:49.797036 | orchestrator | Tuesday 17 February 2026 03:06:45 +0000 (0:00:05.281) 0:00:17.605 ****** 2026-02-17 03:06:49.797051 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.797079 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.797095 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.797109 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.797122 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.797136 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.797150 | orchestrator | 2026-02-17 03:06:49.797165 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-17 03:06:49.797180 | orchestrator | Tuesday 17 February 2026 03:06:46 +0000 (0:00:01.084) 0:00:18.689 ****** 2026-02-17 03:06:49.797195 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.797210 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.797448 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.797467 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.797484 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.797498 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.797513 | orchestrator | 2026-02-17 03:06:49.797528 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-17 03:06:49.797547 | orchestrator | Tuesday 17 February 2026 
03:06:47 +0000 (0:00:01.740) 0:00:20.430 ****** 2026-02-17 03:06:49.797561 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.797576 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.797591 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.797606 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.797620 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.797634 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.797650 | orchestrator | 2026-02-17 03:06:49.797664 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-17 03:06:49.797679 | orchestrator | Tuesday 17 February 2026 03:06:48 +0000 (0:00:00.744) 0:00:21.174 ****** 2026-02-17 03:06:49.797696 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-17 03:06:49.797724 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-17 03:06:49.797740 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:06:49.797755 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-17 03:06:49.797818 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-17 03:06:49.797837 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:06:49.797853 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-17 03:06:49.797867 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-17 03:06:49.797882 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:06:49.797897 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-17 03:06:49.797911 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-17 03:06:49.797926 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:06:49.797941 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-17 03:06:49.797956 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-17 03:06:49.797971 | 
orchestrator | skipping: [testbed-node-1] 2026-02-17 03:06:49.797987 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-17 03:06:49.798002 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-17 03:06:49.798102 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:06:49.798129 | orchestrator | 2026-02-17 03:06:49.798146 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-17 03:06:49.798192 | orchestrator | Tuesday 17 February 2026 03:06:49 +0000 (0:00:01.108) 0:00:22.283 ****** 2026-02-17 03:08:06.858448 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:08:06.858572 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:08:06.858591 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:08:06.858604 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:08:06.858618 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.858632 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.858642 | orchestrator | 2026-02-17 03:08:06.858651 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-17 03:08:06.858661 | orchestrator | Tuesday 17 February 2026 03:06:50 +0000 (0:00:00.713) 0:00:22.996 ****** 2026-02-17 03:08:06.858669 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:08:06.858677 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:08:06.858684 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:08:06.858692 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:08:06.858699 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.858707 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.858714 | orchestrator | 2026-02-17 03:08:06.858722 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-17 03:08:06.858730 | orchestrator | 2026-02-17 03:08:06.858737 | 
orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-17 03:08:06.858746 | orchestrator | Tuesday 17 February 2026 03:06:52 +0000 (0:00:01.515) 0:00:24.511 ****** 2026-02-17 03:08:06.858753 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:06.858762 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:06.858769 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:06.858819 | orchestrator | 2026-02-17 03:08:06.858826 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-17 03:08:06.858834 | orchestrator | Tuesday 17 February 2026 03:06:53 +0000 (0:00:01.951) 0:00:26.463 ****** 2026-02-17 03:08:06.858842 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:06.858849 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:06.858856 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:06.858864 | orchestrator | 2026-02-17 03:08:06.858871 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-17 03:08:06.858879 | orchestrator | Tuesday 17 February 2026 03:06:55 +0000 (0:00:01.537) 0:00:28.001 ****** 2026-02-17 03:08:06.858887 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:06.858894 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:06.858901 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:06.858909 | orchestrator | 2026-02-17 03:08:06.858917 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-17 03:08:06.858945 | orchestrator | Tuesday 17 February 2026 03:06:56 +0000 (0:00:00.909) 0:00:28.911 ****** 2026-02-17 03:08:06.858953 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:06.858960 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:06.858969 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:06.858988 | orchestrator | 2026-02-17 03:08:06.858996 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] 
********************************* 2026-02-17 03:08:06.859006 | orchestrator | Tuesday 17 February 2026 03:06:57 +0000 (0:00:00.730) 0:00:29.642 ****** 2026-02-17 03:08:06.859014 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:08:06.859023 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.859032 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.859040 | orchestrator | 2026-02-17 03:08:06.859049 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-17 03:08:06.859073 | orchestrator | Tuesday 17 February 2026 03:06:57 +0000 (0:00:00.350) 0:00:29.993 ****** 2026-02-17 03:08:06.859081 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:06.859090 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:06.859099 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:06.859108 | orchestrator | 2026-02-17 03:08:06.859117 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-17 03:08:06.859126 | orchestrator | Tuesday 17 February 2026 03:06:58 +0000 (0:00:00.954) 0:00:30.947 ****** 2026-02-17 03:08:06.859134 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:06.859143 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:06.859151 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:06.859160 | orchestrator | 2026-02-17 03:08:06.859169 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-17 03:08:06.859178 | orchestrator | Tuesday 17 February 2026 03:07:00 +0000 (0:00:01.621) 0:00:32.569 ****** 2026-02-17 03:08:06.859186 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-02-17 03:08:06.859195 | orchestrator | 2026-02-17 03:08:06.859203 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-17 03:08:06.859212 | orchestrator | 
Tuesday 17 February 2026 03:07:01 +0000 (0:00:01.195) 0:00:33.765 ****** 2026-02-17 03:08:06.859221 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:06.859229 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:06.859238 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:06.859246 | orchestrator | 2026-02-17 03:08:06.859255 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-17 03:08:06.859264 | orchestrator | Tuesday 17 February 2026 03:07:03 +0000 (0:00:02.364) 0:00:36.130 ****** 2026-02-17 03:08:06.859273 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.859282 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.859291 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:06.859298 | orchestrator | 2026-02-17 03:08:06.859305 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-17 03:08:06.859313 | orchestrator | Tuesday 17 February 2026 03:07:04 +0000 (0:00:00.622) 0:00:36.752 ****** 2026-02-17 03:08:06.859320 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.859328 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.859335 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:06.859342 | orchestrator | 2026-02-17 03:08:06.859350 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-17 03:08:06.859357 | orchestrator | Tuesday 17 February 2026 03:07:05 +0000 (0:00:00.996) 0:00:37.748 ****** 2026-02-17 03:08:06.859364 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.859372 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.859379 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:06.859386 | orchestrator | 2026-02-17 03:08:06.859394 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-17 03:08:06.859417 | orchestrator | Tuesday 17 February 2026 
03:07:06 +0000 (0:00:01.296) 0:00:39.045 ****** 2026-02-17 03:08:06.859425 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:08:06.859440 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.859448 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.859455 | orchestrator | 2026-02-17 03:08:06.859463 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-17 03:08:06.859470 | orchestrator | Tuesday 17 February 2026 03:07:07 +0000 (0:00:00.607) 0:00:39.652 ****** 2026-02-17 03:08:06.859477 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:08:06.859485 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.859492 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.859499 | orchestrator | 2026-02-17 03:08:06.859507 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-17 03:08:06.859514 | orchestrator | Tuesday 17 February 2026 03:07:07 +0000 (0:00:00.363) 0:00:40.016 ****** 2026-02-17 03:08:06.859521 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:06.859528 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:06.859536 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:06.859543 | orchestrator | 2026-02-17 03:08:06.859555 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-17 03:08:06.859562 | orchestrator | Tuesday 17 February 2026 03:07:08 +0000 (0:00:01.180) 0:00:41.196 ****** 2026-02-17 03:08:06.859570 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:06.859577 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:06.859584 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:06.859592 | orchestrator | 2026-02-17 03:08:06.859599 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-17 03:08:06.859607 | orchestrator | Tuesday 17 February 2026 03:07:11 +0000 
(0:00:02.712) 0:00:43.909 ******
2026-02-17 03:08:06.859614 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:08:06.859621 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:08:06.859628 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:08:06.859639 | orchestrator |
2026-02-17 03:08:06.859647 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-17 03:08:06.859655 | orchestrator | Tuesday 17 February 2026 03:07:11 +0000 (0:00:00.362) 0:00:44.272 ******
2026-02-17 03:08:06.859662 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-17 03:08:06.859671 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-17 03:08:06.859679 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-17 03:08:06.859693 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-17 03:08:06.859704 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-17 03:08:06.859717 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-17 03:08:06.859729 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-17 03:08:06.859737 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-17 03:08:06.859744 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-17 03:08:06.859751 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-17 03:08:06.859759 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-17 03:08:06.859818 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-17 03:08:06.859827 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-17 03:08:06.859835 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-17 03:08:06.859842 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-17 03:08:06.859849 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:06.859857 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:06.859864 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:06.859871 | orchestrator | 2026-02-17 03:08:06.859883 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-17 03:08:06.859891 | orchestrator | Tuesday 17 February 2026 03:08:05 +0000 (0:00:53.731) 0:01:38.004 ****** 2026-02-17 03:08:06.859898 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:08:06.859906 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:06.859913 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:06.859920 | orchestrator | 2026-02-17 03:08:06.859928 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-17 03:08:06.859935 | orchestrator | Tuesday 17 February 2026 03:08:05 +0000 (0:00:00.368) 0:01:38.372 ****** 2026-02-17 03:08:06.859949 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:50.153992 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:50.154131 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:50.154143 | orchestrator | 2026-02-17 03:08:50.154151 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-17 03:08:50.154159 | orchestrator | Tuesday 17 February 2026 03:08:06 +0000 (0:00:00.979) 0:01:39.352 ****** 2026-02-17 03:08:50.154166 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:50.154174 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:50.154180 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:50.154187 | orchestrator | 2026-02-17 03:08:50.154195 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-17 03:08:50.154205 | orchestrator | Tuesday 17 February 2026 03:08:08 +0000 (0:00:01.237) 0:01:40.590 ****** 2026-02-17 03:08:50.154216 
| orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:50.154228 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:50.154239 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:50.154252 | orchestrator | 2026-02-17 03:08:50.154265 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-17 03:08:50.154278 | orchestrator | Tuesday 17 February 2026 03:08:34 +0000 (0:00:26.680) 0:02:07.270 ****** 2026-02-17 03:08:50.154285 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:50.154293 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:50.154300 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:50.154306 | orchestrator | 2026-02-17 03:08:50.154313 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-17 03:08:50.154320 | orchestrator | Tuesday 17 February 2026 03:08:35 +0000 (0:00:00.669) 0:02:07.940 ****** 2026-02-17 03:08:50.154327 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:50.154334 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:50.154341 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:50.154348 | orchestrator | 2026-02-17 03:08:50.154355 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-17 03:08:50.154361 | orchestrator | Tuesday 17 February 2026 03:08:36 +0000 (0:00:00.705) 0:02:08.645 ****** 2026-02-17 03:08:50.154368 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:50.154375 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:50.154382 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:50.154388 | orchestrator | 2026-02-17 03:08:50.154395 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-17 03:08:50.154422 | orchestrator | Tuesday 17 February 2026 03:08:36 +0000 (0:00:00.642) 0:02:09.287 ****** 2026-02-17 03:08:50.154434 | orchestrator | ok: [testbed-node-0] 
2026-02-17 03:08:50.154445 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:50.154457 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:50.154467 | orchestrator | 2026-02-17 03:08:50.154477 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-17 03:08:50.154487 | orchestrator | Tuesday 17 February 2026 03:08:37 +0000 (0:00:00.913) 0:02:10.201 ****** 2026-02-17 03:08:50.154497 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:50.154506 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:50.154516 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:50.154526 | orchestrator | 2026-02-17 03:08:50.154536 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-17 03:08:50.154546 | orchestrator | Tuesday 17 February 2026 03:08:38 +0000 (0:00:00.339) 0:02:10.540 ****** 2026-02-17 03:08:50.154556 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:50.154566 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:50.154577 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:50.154587 | orchestrator | 2026-02-17 03:08:50.154597 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-17 03:08:50.154608 | orchestrator | Tuesday 17 February 2026 03:08:38 +0000 (0:00:00.715) 0:02:11.256 ****** 2026-02-17 03:08:50.154619 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:50.154631 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:50.154643 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:50.154655 | orchestrator | 2026-02-17 03:08:50.154666 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-17 03:08:50.154678 | orchestrator | Tuesday 17 February 2026 03:08:39 +0000 (0:00:00.648) 0:02:11.905 ****** 2026-02-17 03:08:50.154703 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:50.154720 | 
orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:50.154729 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:50.154737 | orchestrator | 2026-02-17 03:08:50.154748 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-17 03:08:50.154761 | orchestrator | Tuesday 17 February 2026 03:08:40 +0000 (0:00:00.871) 0:02:12.776 ****** 2026-02-17 03:08:50.154798 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:08:50.154810 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:08:50.154822 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:08:50.154834 | orchestrator | 2026-02-17 03:08:50.154845 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-17 03:08:50.154856 | orchestrator | Tuesday 17 February 2026 03:08:41 +0000 (0:00:01.171) 0:02:13.947 ****** 2026-02-17 03:08:50.154868 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:08:50.154879 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:50.154888 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:50.154895 | orchestrator | 2026-02-17 03:08:50.154902 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-17 03:08:50.154909 | orchestrator | Tuesday 17 February 2026 03:08:41 +0000 (0:00:00.316) 0:02:14.264 ****** 2026-02-17 03:08:50.154916 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:08:50.154922 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:08:50.154929 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:08:50.154936 | orchestrator | 2026-02-17 03:08:50.154943 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-17 03:08:50.154949 | orchestrator | Tuesday 17 February 2026 03:08:42 +0000 (0:00:00.327) 0:02:14.591 ****** 2026-02-17 03:08:50.154956 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:50.154963 | orchestrator | 
ok: [testbed-node-2] 2026-02-17 03:08:50.154970 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:50.154977 | orchestrator | 2026-02-17 03:08:50.154983 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-17 03:08:50.154990 | orchestrator | Tuesday 17 February 2026 03:08:42 +0000 (0:00:00.720) 0:02:15.312 ****** 2026-02-17 03:08:50.155009 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:08:50.155016 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:08:50.155040 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:08:50.155047 | orchestrator | 2026-02-17 03:08:50.155054 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-17 03:08:50.155063 | orchestrator | Tuesday 17 February 2026 03:08:43 +0000 (0:00:00.978) 0:02:16.290 ****** 2026-02-17 03:08:50.155070 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-17 03:08:50.155077 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-17 03:08:50.155084 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-17 03:08:50.155091 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-17 03:08:50.155098 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-17 03:08:50.155104 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-17 03:08:50.155111 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-17 03:08:50.155119 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-17 
03:08:50.155126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-17 03:08:50.155132 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-17 03:08:50.155139 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-17 03:08:50.155146 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-17 03:08:50.155153 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-17 03:08:50.155160 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-17 03:08:50.155166 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-17 03:08:50.155173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-17 03:08:50.155179 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-17 03:08:50.155186 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-17 03:08:50.155193 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-17 03:08:50.155200 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-17 03:08:50.155207 | orchestrator | 2026-02-17 03:08:50.155213 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-17 03:08:50.155220 | orchestrator | 2026-02-17 03:08:50.155227 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-17 03:08:50.155234 | orchestrator | Tuesday 17 February 2026 03:08:46 +0000 (0:00:03.004) 
0:02:19.294 ****** 2026-02-17 03:08:50.155241 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:08:50.155248 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:08:50.155254 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:08:50.155261 | orchestrator | 2026-02-17 03:08:50.155280 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-17 03:08:50.155288 | orchestrator | Tuesday 17 February 2026 03:08:47 +0000 (0:00:00.358) 0:02:19.653 ****** 2026-02-17 03:08:50.155294 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:08:50.155301 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:08:50.155308 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:08:50.155320 | orchestrator | 2026-02-17 03:08:50.155327 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-17 03:08:50.155333 | orchestrator | Tuesday 17 February 2026 03:08:48 +0000 (0:00:00.949) 0:02:20.603 ****** 2026-02-17 03:08:50.155340 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:08:50.155347 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:08:50.155354 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:08:50.155360 | orchestrator | 2026-02-17 03:08:50.155367 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-17 03:08:50.155374 | orchestrator | Tuesday 17 February 2026 03:08:48 +0000 (0:00:00.357) 0:02:20.961 ****** 2026-02-17 03:08:50.155380 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:08:50.155387 | orchestrator | 2026-02-17 03:08:50.155394 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-17 03:08:50.155401 | orchestrator | Tuesday 17 February 2026 03:08:48 +0000 (0:00:00.518) 0:02:21.479 ****** 2026-02-17 03:08:50.155408 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:08:50.155415 | 
orchestrator | skipping: [testbed-node-4] 2026-02-17 03:08:50.155421 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:08:50.155428 | orchestrator | 2026-02-17 03:08:50.155435 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-17 03:08:50.155442 | orchestrator | Tuesday 17 February 2026 03:08:49 +0000 (0:00:00.585) 0:02:22.064 ****** 2026-02-17 03:08:50.155448 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:08:50.155455 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:08:50.155462 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:08:50.155469 | orchestrator | 2026-02-17 03:08:50.155475 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-17 03:08:50.155482 | orchestrator | Tuesday 17 February 2026 03:08:49 +0000 (0:00:00.372) 0:02:22.437 ****** 2026-02-17 03:08:50.155493 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:10:34.480774 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:10:34.480875 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:10:34.480883 | orchestrator | 2026-02-17 03:10:34.480889 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-17 03:10:34.480895 | orchestrator | Tuesday 17 February 2026 03:08:50 +0000 (0:00:00.358) 0:02:22.795 ****** 2026-02-17 03:10:34.480900 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:10:34.480905 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:10:34.480910 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:10:34.480915 | orchestrator | 2026-02-17 03:10:34.480920 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-17 03:10:34.480924 | orchestrator | Tuesday 17 February 2026 03:08:50 +0000 (0:00:00.658) 0:02:23.454 ****** 2026-02-17 03:10:34.480929 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:10:34.480934 | 
orchestrator | changed: [testbed-node-4] 2026-02-17 03:10:34.480938 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:10:34.480943 | orchestrator | 2026-02-17 03:10:34.480947 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-17 03:10:34.480952 | orchestrator | Tuesday 17 February 2026 03:08:52 +0000 (0:00:01.468) 0:02:24.923 ****** 2026-02-17 03:10:34.480956 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:10:34.480961 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:10:34.480966 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:10:34.480970 | orchestrator | 2026-02-17 03:10:34.480975 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-17 03:10:34.480979 | orchestrator | Tuesday 17 February 2026 03:08:53 +0000 (0:00:01.274) 0:02:26.197 ****** 2026-02-17 03:10:34.480984 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:10:34.480989 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:10:34.480993 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:10:34.480998 | orchestrator | 2026-02-17 03:10:34.481002 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-17 03:10:34.481020 | orchestrator | 2026-02-17 03:10:34.481025 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-17 03:10:34.481030 | orchestrator | Tuesday 17 February 2026 03:09:03 +0000 (0:00:09.980) 0:02:36.178 ****** 2026-02-17 03:10:34.481035 | orchestrator | ok: [testbed-manager] 2026-02-17 03:10:34.481040 | orchestrator | 2026-02-17 03:10:34.481044 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-17 03:10:34.481049 | orchestrator | Tuesday 17 February 2026 03:09:04 +0000 (0:00:00.840) 0:02:37.018 ****** 2026-02-17 03:10:34.481053 | orchestrator | changed: [testbed-manager] 2026-02-17 
03:10:34.481058 | orchestrator | 2026-02-17 03:10:34.481063 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-17 03:10:34.481067 | orchestrator | Tuesday 17 February 2026 03:09:05 +0000 (0:00:00.762) 0:02:37.781 ****** 2026-02-17 03:10:34.481072 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-17 03:10:34.481076 | orchestrator | 2026-02-17 03:10:34.481081 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-17 03:10:34.481085 | orchestrator | Tuesday 17 February 2026 03:09:05 +0000 (0:00:00.545) 0:02:38.327 ****** 2026-02-17 03:10:34.481090 | orchestrator | changed: [testbed-manager] 2026-02-17 03:10:34.481094 | orchestrator | 2026-02-17 03:10:34.481099 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-17 03:10:34.481103 | orchestrator | Tuesday 17 February 2026 03:09:06 +0000 (0:00:01.012) 0:02:39.339 ****** 2026-02-17 03:10:34.481108 | orchestrator | changed: [testbed-manager] 2026-02-17 03:10:34.481112 | orchestrator | 2026-02-17 03:10:34.481117 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-17 03:10:34.481121 | orchestrator | Tuesday 17 February 2026 03:09:07 +0000 (0:00:00.684) 0:02:40.024 ****** 2026-02-17 03:10:34.481126 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-17 03:10:34.481131 | orchestrator | 2026-02-17 03:10:34.481135 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-17 03:10:34.481140 | orchestrator | Tuesday 17 February 2026 03:09:09 +0000 (0:00:01.886) 0:02:41.910 ****** 2026-02-17 03:10:34.481144 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-17 03:10:34.481149 | orchestrator | 2026-02-17 03:10:34.481170 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-02-17 03:10:34.481175 | orchestrator | Tuesday 17 February 2026 03:09:10 +0000 (0:00:00.972) 0:02:42.882 ******
2026-02-17 03:10:34.481179 | orchestrator | changed: [testbed-manager]
2026-02-17 03:10:34.481184 | orchestrator |
2026-02-17 03:10:34.481188 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-17 03:10:34.481193 | orchestrator | Tuesday 17 February 2026 03:09:10 +0000 (0:00:00.490) 0:02:43.373 ******
2026-02-17 03:10:34.481197 | orchestrator | changed: [testbed-manager]
2026-02-17 03:10:34.481202 | orchestrator |
2026-02-17 03:10:34.481206 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-17 03:10:34.481211 | orchestrator |
2026-02-17 03:10:34.481215 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-17 03:10:34.481220 | orchestrator | Tuesday 17 February 2026 03:09:11 +0000 (0:00:00.513) 0:02:43.886 ******
2026-02-17 03:10:34.481225 | orchestrator | ok: [testbed-manager]
2026-02-17 03:10:34.481230 | orchestrator |
2026-02-17 03:10:34.481234 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-17 03:10:34.481239 | orchestrator | Tuesday 17 February 2026 03:09:11 +0000 (0:00:00.441) 0:02:44.328 ******
2026-02-17 03:10:34.481243 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-17 03:10:34.481249 | orchestrator |
2026-02-17 03:10:34.481256 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-17 03:10:34.481263 | orchestrator | Tuesday 17 February 2026 03:09:12 +0000 (0:00:00.263) 0:02:44.591 ******
2026-02-17 03:10:34.481270 | orchestrator | ok: [testbed-manager]
2026-02-17 03:10:34.481276 | orchestrator |
2026-02-17 03:10:34.481290 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-17 03:10:34.481299 | orchestrator | Tuesday 17 February 2026 03:09:13 +0000 (0:00:01.058) 0:02:45.649 ******
2026-02-17 03:10:34.481308 | orchestrator | ok: [testbed-manager]
2026-02-17 03:10:34.481315 | orchestrator |
2026-02-17 03:10:34.481336 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-17 03:10:34.481342 | orchestrator | Tuesday 17 February 2026 03:09:15 +0000 (0:00:02.114) 0:02:47.763 ******
2026-02-17 03:10:34.481347 | orchestrator | changed: [testbed-manager]
2026-02-17 03:10:34.481352 | orchestrator |
2026-02-17 03:10:34.481357 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-17 03:10:34.481362 | orchestrator | Tuesday 17 February 2026 03:09:16 +0000 (0:00:00.976) 0:02:48.740 ******
2026-02-17 03:10:34.481367 | orchestrator | ok: [testbed-manager]
2026-02-17 03:10:34.481373 | orchestrator |
2026-02-17 03:10:34.481378 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-17 03:10:34.481383 | orchestrator | Tuesday 17 February 2026 03:09:16 +0000 (0:00:00.522) 0:02:49.263 ******
2026-02-17 03:10:34.481388 | orchestrator | changed: [testbed-manager]
2026-02-17 03:10:34.481393 | orchestrator |
2026-02-17 03:10:34.481398 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-17 03:10:34.481403 | orchestrator | Tuesday 17 February 2026 03:09:25 +0000 (0:00:09.084) 0:02:58.348 ******
2026-02-17 03:10:34.481408 | orchestrator | changed: [testbed-manager]
2026-02-17 03:10:34.481413 | orchestrator |
2026-02-17 03:10:34.481419 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-17 03:10:34.481424 | orchestrator | Tuesday 17 February 2026 03:09:39 +0000 (0:00:13.663) 0:03:12.011 ******
2026-02-17 03:10:34.481429 | orchestrator | ok: [testbed-manager]
2026-02-17 03:10:34.481434 | orchestrator |
2026-02-17 03:10:34.481438 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-17 03:10:34.481444 | orchestrator |
2026-02-17 03:10:34.481449 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-17 03:10:34.481454 | orchestrator | Tuesday 17 February 2026 03:09:40 +0000 (0:00:00.994) 0:03:13.005 ******
2026-02-17 03:10:34.481459 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:10:34.481464 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:10:34.481470 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:10:34.481475 | orchestrator |
2026-02-17 03:10:34.481480 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-17 03:10:34.481484 | orchestrator | Tuesday 17 February 2026 03:09:40 +0000 (0:00:00.409) 0:03:13.415 ******
2026-02-17 03:10:34.481489 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:10:34.481493 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:10:34.481498 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:10:34.481502 | orchestrator |
2026-02-17 03:10:34.481507 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-17 03:10:34.481511 | orchestrator | Tuesday 17 February 2026 03:09:41 +0000 (0:00:00.382) 0:03:13.797 ******
2026-02-17 03:10:34.481516 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:10:34.481520 | orchestrator |
2026-02-17 03:10:34.481525 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-17 03:10:34.481529 | orchestrator | Tuesday 17 February 2026 03:09:42 +0000 (0:00:00.866) 0:03:14.663 ******
2026-02-17 03:10:34.481534 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-17 03:10:34.481539 | orchestrator |
2026-02-17 03:10:34.481543 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-17 03:10:34.481547 | orchestrator | Tuesday 17 February 2026 03:09:43 +0000 (0:00:00.912) 0:03:15.575 ******
2026-02-17 03:10:34.481552 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 03:10:34.481556 | orchestrator |
2026-02-17 03:10:34.481561 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-17 03:10:34.481569 | orchestrator | Tuesday 17 February 2026 03:09:44 +0000 (0:00:00.970) 0:03:16.545 ******
2026-02-17 03:10:34.481574 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:10:34.481578 | orchestrator |
2026-02-17 03:10:34.481583 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-17 03:10:34.481587 | orchestrator | Tuesday 17 February 2026 03:09:44 +0000 (0:00:00.140) 0:03:16.686 ******
2026-02-17 03:10:34.481592 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 03:10:34.481596 | orchestrator |
2026-02-17 03:10:34.481601 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-17 03:10:34.481605 | orchestrator | Tuesday 17 February 2026 03:09:45 +0000 (0:00:01.028) 0:03:17.714 ******
2026-02-17 03:10:34.481610 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:10:34.481614 | orchestrator |
2026-02-17 03:10:34.481618 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-17 03:10:34.481623 | orchestrator | Tuesday 17 February 2026 03:09:45 +0000 (0:00:00.158) 0:03:17.873 ******
2026-02-17 03:10:34.481628 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:10:34.481632 | orchestrator |
2026-02-17 03:10:34.481636 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-17 03:10:34.481641 | orchestrator | Tuesday 17 February 2026 03:09:45 +0000 (0:00:00.140) 0:03:18.000 ******
2026-02-17 03:10:34.481645 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:10:34.481650 | orchestrator |
2026-02-17 03:10:34.481654 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-17 03:10:34.481662 | orchestrator | Tuesday 17 February 2026 03:09:45 +0000 (0:00:00.140) 0:03:18.141 ******
2026-02-17 03:10:34.481667 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:10:34.481671 | orchestrator |
2026-02-17 03:10:34.481676 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-17 03:10:34.481680 | orchestrator | Tuesday 17 February 2026 03:09:45 +0000 (0:00:00.125) 0:03:18.267 ******
2026-02-17 03:10:34.481685 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-17 03:10:34.481689 | orchestrator |
2026-02-17 03:10:34.481694 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-17 03:10:34.481698 | orchestrator | Tuesday 17 February 2026 03:09:51 +0000 (0:00:06.012) 0:03:24.280 ******
2026-02-17 03:10:34.481703 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-17 03:10:34.481708 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-02-17 03:10:34.481716 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-17 03:11:00.654070 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-17 03:11:00.654198 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-17 03:11:00.654220 | orchestrator |
2026-02-17 03:11:00.654237 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-17 03:11:00.654252 | orchestrator | Tuesday 17 February 2026 03:10:34 +0000 (0:00:42.696) 0:04:06.976 ******
2026-02-17 03:11:00.654266 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 03:11:00.654280 | orchestrator |
2026-02-17 03:11:00.654295 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-17 03:11:00.654310 | orchestrator | Tuesday 17 February 2026 03:10:35 +0000 (0:00:01.459) 0:04:08.435 ******
2026-02-17 03:11:00.654324 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-17 03:11:00.654339 | orchestrator |
2026-02-17 03:11:00.654354 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-17 03:11:00.654368 | orchestrator | Tuesday 17 February 2026 03:10:37 +0000 (0:00:01.793) 0:04:10.229 ******
2026-02-17 03:11:00.654382 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-17 03:11:00.654397 | orchestrator |
2026-02-17 03:11:00.654411 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-17 03:11:00.654426 | orchestrator | Tuesday 17 February 2026 03:10:39 +0000 (0:00:01.552) 0:04:11.781 ******
2026-02-17 03:11:00.654471 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:11:00.654488 | orchestrator |
2026-02-17 03:11:00.654504 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-17 03:11:00.654519 | orchestrator | Tuesday 17 February 2026 03:10:39 +0000 (0:00:00.138) 0:04:11.919 ******
2026-02-17 03:11:00.654535 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-17 03:11:00.654553 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-17 03:11:00.654569 | orchestrator |
2026-02-17 03:11:00.654584 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-17 03:11:00.654598 | orchestrator | Tuesday 17 February 2026 03:10:41 +0000 (0:00:02.038) 0:04:13.958 ******
2026-02-17 03:11:00.654613 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:11:00.654628 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:11:00.654644 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:11:00.654660 | orchestrator |
2026-02-17 03:11:00.654675 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-17 03:11:00.654689 | orchestrator | Tuesday 17 February 2026 03:10:41 +0000 (0:00:00.380) 0:04:14.338 ******
2026-02-17 03:11:00.654702 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:11:00.654715 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:11:00.654729 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:11:00.654742 | orchestrator |
2026-02-17 03:11:00.654754 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-17 03:11:00.654767 | orchestrator |
2026-02-17 03:11:00.654805 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-17 03:11:00.654821 | orchestrator | Tuesday 17 February 2026 03:10:42 +0000 (0:00:00.971) 0:04:15.310 ******
2026-02-17 03:11:00.654835 | orchestrator | ok: [testbed-manager]
2026-02-17 03:11:00.654847 | orchestrator |
2026-02-17 03:11:00.654861 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-17 03:11:00.654875 | orchestrator | Tuesday 17 February 2026 03:10:43 +0000 (0:00:00.411) 0:04:15.721 ******
2026-02-17 03:11:00.654889 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-17 03:11:00.654902 | orchestrator |
2026-02-17 03:11:00.654915 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-17 03:11:00.654929 | orchestrator | Tuesday 17 February 2026 03:10:43 +0000 (0:00:00.257) 0:04:15.979 ******
2026-02-17 03:11:00.654943 | orchestrator | changed: [testbed-manager]
2026-02-17 03:11:00.654956 | orchestrator |
2026-02-17 03:11:00.654970 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-17 03:11:00.654984 | orchestrator |
2026-02-17 03:11:00.654999 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-17 03:11:00.655014 | orchestrator | Tuesday 17 February 2026 03:10:49 +0000 (0:00:06.169) 0:04:22.148 ******
2026-02-17 03:11:00.655027 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:11:00.655040 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:11:00.655054 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:11:00.655068 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:11:00.655082 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:11:00.655095 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:11:00.655108 | orchestrator |
2026-02-17 03:11:00.655123 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-17 03:11:00.655138 | orchestrator | Tuesday 17 February 2026 03:10:50 +0000 (0:00:00.674) 0:04:22.823 ******
2026-02-17 03:11:00.655153 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-17 03:11:00.655166 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-17 03:11:00.655180 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-17 03:11:00.655193 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-17 03:11:00.655222 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-17 03:11:00.655235 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-17 03:11:00.655248 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-17 03:11:00.655262 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-17 03:11:00.655276 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-17 03:11:00.655317 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-17 03:11:00.655332 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-17 03:11:00.655347 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-17 03:11:00.655361 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-17 03:11:00.655372 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-17 03:11:00.655382 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-17 03:11:00.655413 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-17 03:11:00.655424 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-17 03:11:00.655435 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-17 03:11:00.655446 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-17 03:11:00.655457 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-17 03:11:00.655468 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-17 03:11:00.655478 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-17 03:11:00.655489 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-17 03:11:00.655501 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-17 03:11:00.655512 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-17 03:11:00.655523 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-17 03:11:00.655534 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-17 03:11:00.655545 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-17 03:11:00.655557 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-17 03:11:00.655569 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-17 03:11:00.655579 | orchestrator |
2026-02-17 03:11:00.655591 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-17 03:11:00.655602 | orchestrator | Tuesday 17 February 2026 03:10:59 +0000 (0:00:08.887) 0:04:31.710 ******
2026-02-17 03:11:00.655613 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:11:00.655624 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:11:00.655635 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:11:00.655647 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:11:00.655658 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:11:00.655669 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:11:00.655680 | orchestrator |
2026-02-17 03:11:00.655692 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-17 03:11:00.655704 | orchestrator | Tuesday 17 February 2026 03:10:59 +0000 (0:00:00.600) 0:04:32.311 ******
2026-02-17 03:11:00.655716 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:11:00.655739 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:11:00.655749 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:11:00.655760 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:11:00.655770 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:11:00.655811 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:11:00.655824 | orchestrator |
2026-02-17 03:11:00.655835 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:11:00.655846 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 03:11:00.655859 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-17 03:11:00.655871 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-17 03:11:00.655882 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-17 03:11:00.655893 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-17 03:11:00.655903 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-17 03:11:00.655914 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-17 03:11:00.655925 | orchestrator |
2026-02-17 03:11:00.655936 | orchestrator |
2026-02-17 03:11:00.655948 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:11:00.655960 | orchestrator | Tuesday 17 February 2026 03:11:00 +0000 (0:00:00.828) 0:04:33.140 ******
2026-02-17 03:11:00.655984 | orchestrator | ===============================================================================
2026-02-17 03:11:01.192358 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.73s
2026-02-17 03:11:01.192483 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.70s
2026-02-17 03:11:01.192502 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.68s
2026-02-17 03:11:01.192515 | orchestrator | kubectl : Install required packages ------------------------------------ 13.66s
2026-02-17 03:11:01.192526 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.98s
2026-02-17 03:11:01.192537 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.09s
2026-02-17 03:11:01.192548 | orchestrator | Manage labels ----------------------------------------------------------- 8.89s
2026-02-17 03:11:01.192559 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.17s
2026-02-17 03:11:01.192570 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.01s
2026-02-17 03:11:01.192581 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.28s
2026-02-17 03:11:01.192592 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.00s
2026-02-17 03:11:01.192605 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.71s
2026-02-17 03:11:01.192616 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.36s
2026-02-17 03:11:01.192626 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.11s
2026-02-17 03:11:01.192637 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.04s
2026-02-17 03:11:01.192648 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.00s
2026-02-17 03:11:01.192659 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.95s
2026-02-17 03:11:01.192699 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.89s
2026-02-17 03:11:01.192710 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.79s
2026-02-17 03:11:01.192721 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.74s
2026-02-17 03:11:01.604707 | orchestrator | + osism apply copy-kubeconfig
2026-02-17 03:11:13.910280 | orchestrator | 2026-02-17 03:11:13 | INFO  | Task 03fd7af6-ebba-4e0c-a87f-52710b507bc8 (copy-kubeconfig) was prepared for execution.
2026-02-17 03:11:13.910460 | orchestrator | 2026-02-17 03:11:13 | INFO  | It takes a moment until task 03fd7af6-ebba-4e0c-a87f-52710b507bc8 (copy-kubeconfig) has been started and output is visible here.
2026-02-17 03:11:21.710188 | orchestrator |
2026-02-17 03:11:21.710341 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-17 03:11:21.710369 | orchestrator |
2026-02-17 03:11:21.710381 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-17 03:11:21.710392 | orchestrator | Tuesday 17 February 2026 03:11:18 +0000 (0:00:00.181) 0:00:00.182 ******
2026-02-17 03:11:21.710400 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-17 03:11:21.710407 | orchestrator |
2026-02-17 03:11:21.710414 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-17 03:11:21.710443 | orchestrator | Tuesday 17 February 2026 03:11:19 +0000 (0:00:00.761) 0:00:00.944 ******
2026-02-17 03:11:21.710451 | orchestrator | changed: [testbed-manager]
2026-02-17 03:11:21.710459 | orchestrator |
2026-02-17 03:11:21.710466 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-17 03:11:21.710473 | orchestrator | Tuesday 17 February 2026 03:11:20 +0000 (0:00:01.370) 0:00:02.315 ******
2026-02-17 03:11:21.710484 | orchestrator | changed: [testbed-manager]
2026-02-17 03:11:21.710490 | orchestrator |
2026-02-17 03:11:21.710501 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:11:21.710508 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 03:11:21.710517 | orchestrator |
2026-02-17 03:11:21.710526 | orchestrator |
2026-02-17 03:11:21.710536 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:11:21.710546 | orchestrator | Tuesday 17 February 2026 03:11:21 +0000 (0:00:00.525) 0:00:02.840 ******
2026-02-17 03:11:21.710556 | orchestrator | ===============================================================================
2026-02-17 03:11:21.710564 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.37s
2026-02-17 03:11:21.710573 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s
2026-02-17 03:11:21.710582 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.53s
2026-02-17 03:11:22.167010 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-02-17 03:11:34.746739 | orchestrator | 2026-02-17 03:11:34 | INFO  | Task 67cf25e8-b50b-4b78-932a-d33b27b7b524 (openstackclient) was prepared for execution.
2026-02-17 03:11:34.746909 | orchestrator | 2026-02-17 03:11:34 | INFO  | It takes a moment until task 67cf25e8-b50b-4b78-932a-d33b27b7b524 (openstackclient) has been started and output is visible here.
2026-02-17 03:12:25.160454 | orchestrator |
2026-02-17 03:12:25.160545 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-17 03:12:25.160557 | orchestrator |
2026-02-17 03:12:25.160562 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-17 03:12:25.160566 | orchestrator | Tuesday 17 February 2026 03:11:39 +0000 (0:00:00.248) 0:00:00.248 ******
2026-02-17 03:12:25.160572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-17 03:12:25.160578 | orchestrator |
2026-02-17 03:12:25.160599 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-17 03:12:25.160603 | orchestrator | Tuesday 17 February 2026 03:11:40 +0000 (0:00:00.249) 0:00:00.498 ******
2026-02-17 03:12:25.160608 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-17 03:12:25.160613 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-17 03:12:25.160618 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-17 03:12:25.160622 | orchestrator |
2026-02-17 03:12:25.160626 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-17 03:12:25.160630 | orchestrator | Tuesday 17 February 2026 03:11:41 +0000 (0:00:01.352) 0:00:01.851 ******
2026-02-17 03:12:25.160634 | orchestrator | changed: [testbed-manager]
2026-02-17 03:12:25.160638 | orchestrator |
2026-02-17 03:12:25.160642 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-17 03:12:25.160646 | orchestrator | Tuesday 17 February 2026 03:11:43 +0000 (0:00:01.559) 0:00:03.411 ******
2026-02-17 03:12:25.160650 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-17 03:12:25.160656 | orchestrator | ok: [testbed-manager]
2026-02-17 03:12:25.160660 | orchestrator |
2026-02-17 03:12:25.160664 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-17 03:12:25.160668 | orchestrator | Tuesday 17 February 2026 03:12:19 +0000 (0:00:36.614) 0:00:40.025 ******
2026-02-17 03:12:25.160672 | orchestrator | changed: [testbed-manager]
2026-02-17 03:12:25.160676 | orchestrator |
2026-02-17 03:12:25.160679 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-17 03:12:25.160683 | orchestrator | Tuesday 17 February 2026 03:12:20 +0000 (0:00:00.952) 0:00:40.977 ******
2026-02-17 03:12:25.160687 | orchestrator | ok: [testbed-manager]
2026-02-17 03:12:25.160691 | orchestrator |
2026-02-17 03:12:25.160695 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-17 03:12:25.160699 | orchestrator | Tuesday 17 February 2026 03:12:21 +0000 (0:00:00.654) 0:00:41.631 ******
2026-02-17 03:12:25.160702 | orchestrator | changed: [testbed-manager]
2026-02-17 03:12:25.160706 | orchestrator |
2026-02-17 03:12:25.160711 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-17 03:12:25.160715 | orchestrator | Tuesday 17 February 2026 03:12:22 +0000 (0:00:01.621) 0:00:43.253 ******
2026-02-17 03:12:25.160718 | orchestrator | changed: [testbed-manager]
2026-02-17 03:12:25.160722 | orchestrator |
2026-02-17 03:12:25.160727 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-17 03:12:25.160733 | orchestrator | Tuesday 17 February 2026 03:12:23 +0000 (0:00:00.761) 0:00:44.014 ******
2026-02-17 03:12:25.160740 | orchestrator | changed: [testbed-manager]
2026-02-17 03:12:25.160746 | orchestrator |
2026-02-17 03:12:25.160752 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-17 03:12:25.160758 | orchestrator | Tuesday 17 February 2026 03:12:24 +0000 (0:00:00.592) 0:00:44.606 ******
2026-02-17 03:12:25.160764 | orchestrator | ok: [testbed-manager]
2026-02-17 03:12:25.160771 | orchestrator |
2026-02-17 03:12:25.160775 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:12:25.160779 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 03:12:25.160784 | orchestrator |
2026-02-17 03:12:25.160788 | orchestrator |
2026-02-17 03:12:25.160792 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:12:25.160840 | orchestrator | Tuesday 17 February 2026 03:12:24 +0000 (0:00:00.446) 0:00:45.053 ******
2026-02-17 03:12:25.160844 | orchestrator | ===============================================================================
2026-02-17 03:12:25.160848 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.61s
2026-02-17 03:12:25.160852 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.62s
2026-02-17 03:12:25.160861 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.56s
2026-02-17 03:12:25.160865 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.35s
2026-02-17 03:12:25.160869 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.95s
2026-02-17 03:12:25.160873 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.76s
2026-02-17 03:12:25.160877 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.65s
2026-02-17 03:12:25.160880 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.59s
2026-02-17 03:12:25.160884 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.45s
2026-02-17 03:12:25.160888 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.25s
2026-02-17 03:12:27.715479 | orchestrator | 2026-02-17 03:12:27 | INFO  | Task 6593c4c0-b6df-42f2-a0a9-04b53c0b02cd (common) was prepared for execution.
2026-02-17 03:12:27.715582 | orchestrator | 2026-02-17 03:12:27 | INFO  | It takes a moment until task 6593c4c0-b6df-42f2-a0a9-04b53c0b02cd (common) has been started and output is visible here.
2026-02-17 03:12:40.833874 | orchestrator | 2026-02-17 03:12:40.834013 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-17 03:12:40.834122 | orchestrator | 2026-02-17 03:12:40.834139 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-17 03:12:40.834151 | orchestrator | Tuesday 17 February 2026 03:12:32 +0000 (0:00:00.319) 0:00:00.319 ****** 2026-02-17 03:12:40.834163 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:12:40.834175 | orchestrator | 2026-02-17 03:12:40.834186 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-17 03:12:40.834198 | orchestrator | Tuesday 17 February 2026 03:12:33 +0000 (0:00:01.393) 0:00:01.713 ****** 2026-02-17 03:12:40.834208 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 03:12:40.834220 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 03:12:40.834231 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 03:12:40.834242 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 03:12:40.834253 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 03:12:40.834264 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 03:12:40.834275 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 03:12:40.834285 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 03:12:40.834317 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-02-17 03:12:40.834344 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 03:12:40.834367 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 03:12:40.834382 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 03:12:40.834394 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 03:12:40.834406 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 03:12:40.834418 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 03:12:40.834431 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 03:12:40.834443 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 03:12:40.834480 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 03:12:40.834493 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 03:12:40.834505 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 03:12:40.834517 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 03:12:40.834529 | orchestrator | 2026-02-17 03:12:40.834542 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-17 03:12:40.834554 | orchestrator | Tuesday 17 February 2026 03:12:36 +0000 (0:00:02.832) 0:00:04.545 ****** 2026-02-17 03:12:40.834566 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:12:40.834580 | orchestrator | 2026-02-17 03:12:40.834592 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-17 03:12:40.834617 | orchestrator | Tuesday 17 February 2026 03:12:38 +0000 (0:00:01.501) 0:00:06.047 ****** 2026-02-17 03:12:40.834641 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:40.834666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:40.834721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:40.834745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:40.834764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:40.834786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:40.834847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:40.834865 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:40.834877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:40.834910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.935916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936061 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 
03:12:41.936096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:41.936174 | orchestrator | 2026-02-17 03:12:41.936184 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-17 03:12:41.936194 | orchestrator | Tuesday 17 February 2026 03:12:41 +0000 (0:00:03.536) 0:00:09.584 ****** 2026-02-17 03:12:41.936205 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:41.936215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:41.936225 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:41.936234 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:12:41.936245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:41.936266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.585881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586122 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:12:42.586195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:42.586212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586234 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:12:42.586245 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:42.586260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586281 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:12:42.586311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:42.586332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586356 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:12:42.586368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:42.586380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:42.586403 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:12:42.586415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:42.586434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.509884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.509957 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:12:43.509965 | orchestrator | 2026-02-17 03:12:43.509970 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-17 03:12:43.509975 | orchestrator | Tuesday 17 February 2026 03:12:42 +0000 (0:00:00.974) 0:00:10.558 ****** 2026-02-17 03:12:43.509981 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:43.509987 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.509992 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.510009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:43.510050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.510071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.510075 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:12:43.510079 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:12:43.510101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:43.510105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.510110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.510113 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:12:43.510117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:43.510121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.510128 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:43.510136 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:12:43.510140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:43.510155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:48.932584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:48.932664 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:12:48.932673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:48.932681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:48.932687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:48.932693 | 
orchestrator | skipping: [testbed-node-4] 2026-02-17 03:12:48.932698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 03:12:48.932722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:48.932728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:12:48.932733 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:12:48.932739 | orchestrator | 2026-02-17 03:12:48.932745 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-17 
03:12:48.932751 | orchestrator | Tuesday 17 February 2026 03:12:44 +0000 (0:00:01.920) 0:00:12.478 ****** 2026-02-17 03:12:48.932757 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:12:48.932762 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:12:48.932767 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:12:48.932772 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:12:48.932788 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:12:48.932794 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:12:48.932859 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:12:48.932866 | orchestrator | 2026-02-17 03:12:48.932871 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-17 03:12:48.932877 | orchestrator | Tuesday 17 February 2026 03:12:45 +0000 (0:00:00.784) 0:00:13.263 ****** 2026-02-17 03:12:48.932882 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:12:48.932887 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:12:48.932893 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:12:48.932898 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:12:48.932904 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:12:48.932909 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:12:48.932914 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:12:48.932920 | orchestrator | 2026-02-17 03:12:48.932925 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-17 03:12:48.932930 | orchestrator | Tuesday 17 February 2026 03:12:46 +0000 (0:00:00.909) 0:00:14.173 ****** 2026-02-17 03:12:48.932938 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:48.932962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:48.932978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:48.932996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:48.933005 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:48.933013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:48.933035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:12:51.762653 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762936 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762960 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:12:51.762990 | orchestrator | 2026-02-17 03:12:51.762999 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-17 03:12:51.763008 | orchestrator | Tuesday 17 February 2026 03:12:49 +0000 (0:00:03.399) 0:00:17.573 ****** 2026-02-17 03:12:51.763015 | orchestrator | [WARNING]: Skipped 2026-02-17 
03:12:51.763023 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-17 03:12:51.763032 | orchestrator | to this access issue: 2026-02-17 03:12:51.763039 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-17 03:12:51.763047 | orchestrator | directory 2026-02-17 03:12:51.763054 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 03:12:51.763062 | orchestrator | 2026-02-17 03:12:51.763070 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-17 03:12:51.763077 | orchestrator | Tuesday 17 February 2026 03:12:50 +0000 (0:00:01.145) 0:00:18.719 ****** 2026-02-17 03:12:51.763083 | orchestrator | [WARNING]: Skipped 2026-02-17 03:12:51.763095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-17 03:13:02.444993 | orchestrator | to this access issue: 2026-02-17 03:13:02.445096 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-17 03:13:02.445110 | orchestrator | directory 2026-02-17 03:13:02.445117 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 03:13:02.445126 | orchestrator | 2026-02-17 03:13:02.445133 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-17 03:13:02.445142 | orchestrator | Tuesday 17 February 2026 03:12:52 +0000 (0:00:01.330) 0:00:20.049 ****** 2026-02-17 03:13:02.445169 | orchestrator | [WARNING]: Skipped 2026-02-17 03:13:02.445177 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-17 03:13:02.445183 | orchestrator | to this access issue: 2026-02-17 03:13:02.445190 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-17 03:13:02.445196 | orchestrator | directory 2026-02-17 03:13:02.445203 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-02-17 03:13:02.445210 | orchestrator | 2026-02-17 03:13:02.445218 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-17 03:13:02.445225 | orchestrator | Tuesday 17 February 2026 03:12:52 +0000 (0:00:00.922) 0:00:20.972 ****** 2026-02-17 03:13:02.445231 | orchestrator | [WARNING]: Skipped 2026-02-17 03:13:02.445237 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-17 03:13:02.445244 | orchestrator | to this access issue: 2026-02-17 03:13:02.445251 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-17 03:13:02.445258 | orchestrator | directory 2026-02-17 03:13:02.445265 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 03:13:02.445271 | orchestrator | 2026-02-17 03:13:02.445278 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-17 03:13:02.445285 | orchestrator | Tuesday 17 February 2026 03:12:53 +0000 (0:00:00.910) 0:00:21.883 ****** 2026-02-17 03:13:02.445292 | orchestrator | changed: [testbed-manager] 2026-02-17 03:13:02.445299 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:13:02.445306 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:13:02.445312 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:13:02.445319 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:13:02.445326 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:13:02.445348 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:13:02.445355 | orchestrator | 2026-02-17 03:13:02.445362 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-17 03:13:02.445370 | orchestrator | Tuesday 17 February 2026 03:12:56 +0000 (0:00:02.711) 0:00:24.594 ****** 2026-02-17 03:13:02.445377 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 03:13:02.445386 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 03:13:02.445393 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 03:13:02.445400 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 03:13:02.445406 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 03:13:02.445413 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 03:13:02.445426 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 03:13:02.445432 | orchestrator | 2026-02-17 03:13:02.445440 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-17 03:13:02.445447 | orchestrator | Tuesday 17 February 2026 03:12:58 +0000 (0:00:02.249) 0:00:26.844 ****** 2026-02-17 03:13:02.445454 | orchestrator | changed: [testbed-manager] 2026-02-17 03:13:02.445461 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:13:02.445468 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:13:02.445474 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:13:02.445480 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:13:02.445487 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:13:02.445493 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:13:02.445500 | orchestrator | 2026-02-17 03:13:02.445507 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-17 03:13:02.445521 | orchestrator | Tuesday 17 February 2026 03:13:01 +0000 (0:00:02.147) 0:00:28.992 ****** 2026-02-17 
03:13:02.445532 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:02.445559 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:13:02.445569 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:02.445579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:13:02.445587 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:02.445595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:13:02.445607 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:02.445621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:13:02.445636 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:08.864075 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:08.864159 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:13:08.864170 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:08.864176 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:08.864189 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:08.864195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:13:08.864215 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:08.864221 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:08.864239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:13:08.864244 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:08.864249 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:08.864255 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:08.864261 | orchestrator | 2026-02-17 03:13:08.864267 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] 
************************
2026-02-17 03:13:08.864273 | orchestrator | Tuesday 17 February 2026 03:13:02 +0000 (0:00:01.633) 0:00:30.625 ******
2026-02-17 03:13:08.864278 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-17 03:13:08.864284 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-17 03:13:08.864293 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-17 03:13:08.864298 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-17 03:13:08.864303 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-17 03:13:08.864308 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-17 03:13:08.864313 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-17 03:13:08.864318 | orchestrator |
2026-02-17 03:13:08.864323 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-17 03:13:08.864328 | orchestrator | Tuesday 17 February 2026 03:13:04 +0000 (0:00:02.132) 0:00:32.758 ******
2026-02-17 03:13:08.864334 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-17 03:13:08.864339 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-17 03:13:08.864344 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-17 03:13:08.864353 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-17 03:13:08.864358 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-17 03:13:08.864363 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 03:13:08.864368 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 03:13:08.864373 | orchestrator | 2026-02-17 03:13:08.864378 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-17 03:13:08.864382 | orchestrator | Tuesday 17 February 2026 03:13:06 +0000 (0:00:01.954) 0:00:34.713 ****** 2026-02-17 03:13:08.864395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:09.391203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:09.391335 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:09.391361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:09.391414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:09.391454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:09.391474 | orchestrator 
| changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 03:13:09.391494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:09.391541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:09.391562 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:09.391581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:09.391612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:09.391638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:09.391658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:09.391678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:13:09.391709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:14:31.947108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:14:31.947260 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:14:31.947356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:14:31.947383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-17 03:14:31.947412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:14:31.947424 | orchestrator |
2026-02-17 03:14:31.947438 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-17 03:14:31.947451 | orchestrator | Tuesday 17 February 2026 03:13:09 +0000 (0:00:02.652) 0:00:37.365 ******
2026-02-17 03:14:31.947462 | orchestrator | changed: [testbed-manager]
2026-02-17 03:14:31.947474 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:14:31.947485 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:14:31.947497 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:14:31.947516 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:14:31.947534 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:14:31.947553 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:14:31.947572 | orchestrator |
2026-02-17 03:14:31.947590 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-17 03:14:31.947609 | orchestrator | Tuesday 17 February 2026 03:13:10 +0000 (0:00:01.508) 0:00:38.874 ******
2026-02-17 03:14:31.947628 | orchestrator | changed: [testbed-manager]
2026-02-17 03:14:31.947649 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:14:31.947669 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:14:31.947688 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:14:31.947705 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:14:31.947718 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:14:31.947730 | orchestrator |
changed: [testbed-node-5]
2026-02-17 03:14:31.947742 | orchestrator |
2026-02-17 03:14:31.947754 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-17 03:14:31.947767 | orchestrator | Tuesday 17 February 2026 03:13:11 +0000 (0:00:01.107) 0:00:39.981 ******
2026-02-17 03:14:31.947779 | orchestrator |
2026-02-17 03:14:31.947792 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-17 03:14:31.947804 | orchestrator | Tuesday 17 February 2026 03:13:12 +0000 (0:00:00.086) 0:00:40.068 ******
2026-02-17 03:14:31.947815 | orchestrator |
2026-02-17 03:14:31.947869 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-17 03:14:31.947882 | orchestrator | Tuesday 17 February 2026 03:13:12 +0000 (0:00:00.071) 0:00:40.140 ******
2026-02-17 03:14:31.947893 | orchestrator |
2026-02-17 03:14:31.947903 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-17 03:14:31.947914 | orchestrator | Tuesday 17 February 2026 03:13:12 +0000 (0:00:00.069) 0:00:40.209 ******
2026-02-17 03:14:31.947925 | orchestrator |
2026-02-17 03:14:31.947935 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-17 03:14:31.947960 | orchestrator | Tuesday 17 February 2026 03:13:12 +0000 (0:00:00.247) 0:00:40.457 ******
2026-02-17 03:14:31.947971 | orchestrator |
2026-02-17 03:14:31.947982 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-17 03:14:31.947993 | orchestrator | Tuesday 17 February 2026 03:13:12 +0000 (0:00:00.066) 0:00:40.524 ******
2026-02-17 03:14:31.948003 | orchestrator |
2026-02-17 03:14:31.948015 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-17 03:14:31.948048 | orchestrator | Tuesday 17 February 2026 03:13:12 +0000 (0:00:00.062) 0:00:40.586 ******
2026-02-17 03:14:31.948060 | orchestrator |
2026-02-17 03:14:31.948071 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-17 03:14:31.948081 | orchestrator | Tuesday 17 February 2026 03:13:12 +0000 (0:00:00.097) 0:00:40.684 ******
2026-02-17 03:14:31.948092 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:14:31.948103 | orchestrator | changed: [testbed-manager]
2026-02-17 03:14:31.948114 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:14:31.948125 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:14:31.948136 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:14:31.948147 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:14:31.948157 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:14:31.948168 | orchestrator |
2026-02-17 03:14:31.948179 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-17 03:14:31.948189 | orchestrator | Tuesday 17 February 2026 03:13:46 +0000 (0:00:33.738) 0:01:14.422 ******
2026-02-17 03:14:31.948200 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:14:31.948211 | orchestrator | changed: [testbed-manager]
2026-02-17 03:14:31.948222 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:14:31.948233 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:14:31.948243 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:14:31.948254 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:14:31.948264 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:14:31.948275 | orchestrator |
2026-02-17 03:14:31.948286 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-17 03:14:31.948297 | orchestrator | Tuesday 17 February 2026 03:14:20 +0000 (0:00:34.240) 0:01:48.663 ******
2026-02-17 03:14:31.948308 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:14:31.948319 | orchestrator | ok:
[testbed-manager]
2026-02-17 03:14:31.948330 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:14:31.948340 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:14:31.948351 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:14:31.948362 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:14:31.948373 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:14:31.948383 | orchestrator |
2026-02-17 03:14:31.948394 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-17 03:14:31.948405 | orchestrator | Tuesday 17 February 2026 03:14:23 +0000 (0:00:02.896) 0:01:51.559 ******
2026-02-17 03:14:31.948416 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:14:31.948427 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:14:31.948438 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:14:31.948448 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:14:31.948459 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:14:31.948470 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:14:31.948481 | orchestrator | changed: [testbed-manager]
2026-02-17 03:14:31.948491 | orchestrator |
2026-02-17 03:14:31.948502 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:14:31.948515 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-17 03:14:31.948527 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-17 03:14:31.948548 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-17 03:14:31.948567 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-17 03:14:31.948578 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-17 03:14:31.948589 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-17 03:14:31.948599 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-17 03:14:31.948610 | orchestrator |
2026-02-17 03:14:31.948621 | orchestrator |
2026-02-17 03:14:31.948632 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:14:31.948643 | orchestrator | Tuesday 17 February 2026 03:14:31 +0000 (0:00:08.336) 0:01:59.896 ******
2026-02-17 03:14:31.948654 | orchestrator | ===============================================================================
2026-02-17 03:14:31.948665 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.24s
2026-02-17 03:14:31.948676 | orchestrator | common : Restart fluentd container ------------------------------------- 33.74s
2026-02-17 03:14:31.948687 | orchestrator | common : Restart cron container ----------------------------------------- 8.34s
2026-02-17 03:14:31.948697 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.54s
2026-02-17 03:14:31.948708 | orchestrator | common : Copying over config.json files for services -------------------- 3.40s
2026-02-17 03:14:31.948719 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.90s
2026-02-17 03:14:31.948730 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.83s
2026-02-17 03:14:31.948740 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.71s
2026-02-17 03:14:31.948751 | orchestrator | common : Check common containers ---------------------------------------- 2.65s
2026-02-17 03:14:31.948762 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.25s
2026-02-17 03:14:31.948773 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.15s
2026-02-17 03:14:31.948790 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.13s
2026-02-17 03:14:32.406772 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.96s
2026-02-17 03:14:32.406952 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.92s
2026-02-17 03:14:32.406968 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.63s
2026-02-17 03:14:32.406981 | orchestrator | common : Creating log volume -------------------------------------------- 1.51s
2026-02-17 03:14:32.406992 | orchestrator | common : include_tasks -------------------------------------------------- 1.50s
2026-02-17 03:14:32.407003 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s
2026-02-17 03:14:32.407020 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.33s
2026-02-17 03:14:32.407039 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.15s
2026-02-17 03:14:34.956135 | orchestrator | 2026-02-17 03:14:34 | INFO  | Task 348ed320-b7c1-48ed-a526-67d5d0391f82 (loadbalancer) was prepared for execution.
2026-02-17 03:14:34.956225 | orchestrator | 2026-02-17 03:14:34 | INFO  | It takes a moment until task 348ed320-b7c1-48ed-a526-67d5d0391f82 (loadbalancer) has been started and output is visible here.
2026-02-17 03:14:49.490197 | orchestrator |
2026-02-17 03:14:49.490325 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 03:14:49.490413 | orchestrator |
2026-02-17 03:14:49.490428 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 03:14:49.490469 | orchestrator | Tuesday 17 February 2026 03:14:39 +0000 (0:00:00.272) 0:00:00.272 ******
2026-02-17 03:14:49.490482 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:14:49.490497 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:14:49.490508 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:14:49.490520 | orchestrator |
2026-02-17 03:14:49.490533 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 03:14:49.490546 | orchestrator | Tuesday 17 February 2026 03:14:39 +0000 (0:00:00.311) 0:00:00.584 ******
2026-02-17 03:14:49.490559 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-17 03:14:49.490570 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-17 03:14:49.490581 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-17 03:14:49.490593 | orchestrator |
2026-02-17 03:14:49.490605 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-17 03:14:49.490615 | orchestrator |
2026-02-17 03:14:49.490626 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-17 03:14:49.490638 | orchestrator | Tuesday 17 February 2026 03:14:40 +0000 (0:00:00.502) 0:00:01.087 ******
2026-02-17 03:14:49.490667 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:14:49.490679 | orchestrator |
2026-02-17 03:14:49.490691 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-17 03:14:49.490702 | orchestrator | Tuesday 17 February 2026 03:14:41 +0000 (0:00:00.605) 0:00:01.693 ******
2026-02-17 03:14:49.490714 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:14:49.490726 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:14:49.490737 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:14:49.490749 | orchestrator |
2026-02-17 03:14:49.490760 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-17 03:14:49.490773 | orchestrator | Tuesday 17 February 2026 03:14:41 +0000 (0:00:00.616) 0:00:02.309 ******
2026-02-17 03:14:49.490786 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:14:49.490802 | orchestrator |
2026-02-17 03:14:49.490817 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-17 03:14:49.490872 | orchestrator | Tuesday 17 February 2026 03:14:42 +0000 (0:00:00.721) 0:00:03.030 ******
2026-02-17 03:14:49.490886 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:14:49.490895 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:14:49.490904 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:14:49.490915 | orchestrator |
2026-02-17 03:14:49.490924 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-17 03:14:49.490935 | orchestrator | Tuesday 17 February 2026 03:14:42 +0000 (0:00:00.626) 0:00:03.657 ******
2026-02-17 03:14:49.490946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-17 03:14:49.490956 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-17 03:14:49.490968 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-17 03:14:49.490979 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-17 03:14:49.490990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-17 03:14:49.491001 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-17 03:14:49.491012 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-17 03:14:49.491026 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-17 03:14:49.491038 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-17 03:14:49.491048 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-17 03:14:49.491072 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-17 03:14:49.491083 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-17 03:14:49.491095 | orchestrator |
2026-02-17 03:14:49.491106 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-17 03:14:49.491117 | orchestrator | Tuesday 17 February 2026 03:14:45 +0000 (0:00:02.148) 0:00:05.805 ******
2026-02-17 03:14:49.491129 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-17 03:14:49.491140 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-17 03:14:49.491152 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-17 03:14:49.491163 | orchestrator |
2026-02-17 03:14:49.491176 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-17 03:14:49.491189 | orchestrator | Tuesday 17 February 2026 03:14:45 +0000 (0:00:00.695) 0:00:06.500 ******
2026-02-17 03:14:49.491200 | orchestrator | changed: [testbed-node-0] => 
(item=ip_vs) 2026-02-17 03:14:49.491211 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-17 03:14:49.491223 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-17 03:14:49.491234 | orchestrator | 2026-02-17 03:14:49.491246 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-17 03:14:49.491253 | orchestrator | Tuesday 17 February 2026 03:14:47 +0000 (0:00:01.295) 0:00:07.796 ****** 2026-02-17 03:14:49.491260 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-17 03:14:49.491266 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:14:49.491295 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-17 03:14:49.491302 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:14:49.491309 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-17 03:14:49.491315 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:14:49.491322 | orchestrator | 2026-02-17 03:14:49.491329 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-17 03:14:49.491335 | orchestrator | Tuesday 17 February 2026 03:14:47 +0000 (0:00:00.551) 0:00:08.348 ****** 2026-02-17 03:14:49.491344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 03:14:49.491365 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 03:14:49.491373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 03:14:49.491387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 
03:14:49.491394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:14:49.491407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:14:54.866713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:14:54.866950 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:14:54.866974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:14:54.866988 | orchestrator | 2026-02-17 03:14:54.867001 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-17 03:14:54.867014 | orchestrator | Tuesday 17 February 2026 03:14:49 +0000 (0:00:01.808) 0:00:10.157 ****** 2026-02-17 03:14:54.867048 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:14:54.867065 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:14:54.867084 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:14:54.867102 | orchestrator | 2026-02-17 03:14:54.867121 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-17 03:14:54.867142 | orchestrator | Tuesday 17 February 2026 03:14:50 +0000 (0:00:00.919) 0:00:11.077 ****** 2026-02-17 03:14:54.867163 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-17 03:14:54.867185 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-17 
03:14:54.867206 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-17 03:14:54.867228 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-17 03:14:54.867241 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-17 03:14:54.867251 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-17 03:14:54.867262 | orchestrator | 2026-02-17 03:14:54.867273 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-17 03:14:54.867284 | orchestrator | Tuesday 17 February 2026 03:14:51 +0000 (0:00:01.493) 0:00:12.571 ****** 2026-02-17 03:14:54.867295 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:14:54.867306 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:14:54.867317 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:14:54.867328 | orchestrator | 2026-02-17 03:14:54.867339 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-17 03:14:54.867350 | orchestrator | Tuesday 17 February 2026 03:14:52 +0000 (0:00:00.924) 0:00:13.495 ****** 2026-02-17 03:14:54.867361 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:14:54.867372 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:14:54.867383 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:14:54.867393 | orchestrator | 2026-02-17 03:14:54.867404 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-17 03:14:54.867415 | orchestrator | Tuesday 17 February 2026 03:14:54 +0000 (0:00:01.377) 0:00:14.873 ****** 2026-02-17 03:14:54.867427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 03:14:54.867466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:14:54.867487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:14:54.867509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00', '__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 03:14:54.867543 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:14:54.867564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 03:14:54.867623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:14:54.867637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:14:54.867649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00', '__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 03:14:54.867660 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:14:54.867683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 03:14:57.664303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:14:57.664435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:14:57.664451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00', '__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 03:14:57.664464 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:14:57.664478 | orchestrator | 2026-02-17 03:14:57.664490 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-17 03:14:57.664502 | orchestrator | Tuesday 17 February 2026 03:14:54 +0000 (0:00:00.665) 0:00:15.539 ****** 2026-02-17 03:14:57.664514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 03:14:57.664526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 03:14:57.664538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 03:14:57.664593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:14:57.664607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:14:57.664618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00', 
'__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 03:14:57.664630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:14:57.664642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:14:57.664653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00', 
'__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 03:14:57.664685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:06.252177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:06.252288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00', 
'__omit_place_holder__2ff347619b2af98af63ff84e18651220d3ca8d00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 03:15:06.252303 | orchestrator | 2026-02-17 03:15:06.252314 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-17 03:15:06.252323 | orchestrator | Tuesday 17 February 2026 03:14:57 +0000 (0:00:02.793) 0:00:18.332 ****** 2026-02-17 03:15:06.252332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:06.252342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:06.252351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:06.252379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:06.252417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:06.252439 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:06.252448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:06.252456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:06.252465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:06.252473 | orchestrator | 2026-02-17 03:15:06.252481 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-17 03:15:06.252497 | orchestrator | Tuesday 17 February 2026 03:15:00 +0000 (0:00:03.106) 0:00:21.439 ****** 2026-02-17 03:15:06.252514 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-17 03:15:06.252523 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-17 03:15:06.252531 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-17 03:15:06.252539 | orchestrator | 2026-02-17 03:15:06.252547 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-17 03:15:06.252556 | orchestrator | Tuesday 17 February 2026 03:15:02 +0000 (0:00:01.975) 0:00:23.415 ****** 2026-02-17 03:15:06.252564 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-17 03:15:06.252572 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-17 03:15:06.252580 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-17 03:15:06.252588 | orchestrator | 2026-02-17 03:15:06.252596 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-17 03:15:06.252604 | orchestrator | Tuesday 17 February 2026 03:15:05 +0000 
(0:00:02.932) 0:00:26.347 ****** 2026-02-17 03:15:06.252612 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:06.252621 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:06.252629 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:06.252638 | orchestrator | 2026-02-17 03:15:06.252652 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-17 03:15:18.121482 | orchestrator | Tuesday 17 February 2026 03:15:06 +0000 (0:00:00.581) 0:00:26.928 ****** 2026-02-17 03:15:18.121571 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-17 03:15:18.121588 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-17 03:15:18.121597 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-17 03:15:18.121606 | orchestrator | 2026-02-17 03:15:18.121615 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-17 03:15:18.121624 | orchestrator | Tuesday 17 February 2026 03:15:08 +0000 (0:00:02.180) 0:00:29.109 ****** 2026-02-17 03:15:18.121632 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-17 03:15:18.121644 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-17 03:15:18.121655 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-17 03:15:18.121662 | orchestrator | 2026-02-17 03:15:18.121670 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-17 03:15:18.121678 | orchestrator | Tuesday 17 February 2026 
03:15:10 +0000 (0:00:02.170) 0:00:31.280 ****** 2026-02-17 03:15:18.121687 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-17 03:15:18.121695 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-17 03:15:18.121704 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-17 03:15:18.121727 | orchestrator | 2026-02-17 03:15:18.121756 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-17 03:15:18.121765 | orchestrator | Tuesday 17 February 2026 03:15:12 +0000 (0:00:01.531) 0:00:32.811 ****** 2026-02-17 03:15:18.121774 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-17 03:15:18.121783 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-17 03:15:18.121791 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-17 03:15:18.121799 | orchestrator | 2026-02-17 03:15:18.121829 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-17 03:15:18.121873 | orchestrator | Tuesday 17 February 2026 03:15:13 +0000 (0:00:01.527) 0:00:34.339 ****** 2026-02-17 03:15:18.121883 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:15:18.121891 | orchestrator | 2026-02-17 03:15:18.121900 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-17 03:15:18.121908 | orchestrator | Tuesday 17 February 2026 03:15:14 +0000 (0:00:00.551) 0:00:34.891 ****** 2026-02-17 03:15:18.121917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:18.121929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:18.121941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:18.121968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:18.121978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:18.121986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:18.122003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:18.122009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:18.122068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:18.122076 | orchestrator | 2026-02-17 03:15:18.122083 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-17 03:15:18.122088 | orchestrator | Tuesday 17 February 2026 03:15:17 +0000 (0:00:03.307) 0:00:38.198 ****** 2026-02-17 03:15:18.122104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 03:15:18.938450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:18.938545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:18.938578 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:18.938590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 03:15:18.938600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:18.938609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:18.938617 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:18.938625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 03:15:18.938665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:18.938675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:18.938690 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:18.938698 | orchestrator | 2026-02-17 03:15:18.938707 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-17 
03:15:18.938717 | orchestrator | Tuesday 17 February 2026 03:15:18 +0000 (0:00:00.599) 0:00:38.798 ****** 2026-02-17 03:15:18.938726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 03:15:18.938734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:18.938743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:18.938751 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:18.938759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 03:15:18.938777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:19.828737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:19.828967 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:19.828992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 03:15:19.829006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:19.829018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:19.829029 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:19.829041 | orchestrator | 2026-02-17 03:15:19.829053 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-17 03:15:19.829065 | orchestrator | Tuesday 17 February 2026 03:15:18 +0000 (0:00:00.810) 0:00:39.608 ****** 2026-02-17 03:15:19.829077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 03:15:19.829089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:19.829121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:19.829142 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:19.829154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 03:15:19.829165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:19.829177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:19.829188 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:19.829199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 03:15:19.829227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:19.829245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:19.829272 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:21.211956 | orchestrator | 2026-02-17 03:15:21.212025 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-17 03:15:21.212033 | orchestrator | Tuesday 17 February 2026 03:15:19 +0000 (0:00:00.888) 0:00:40.497 ****** 2026-02-17 03:15:21.212040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 03:15:21.212047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:21.212052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:21.212057 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:21.212062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 03:15:21.212066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:21.212087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:21.212107 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:21.212123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 03:15:21.212128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:21.212133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:21.212138 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:21.212144 | orchestrator | 2026-02-17 03:15:21.212150 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-17 03:15:21.212156 | orchestrator | Tuesday 17 February 2026 03:15:20 +0000 (0:00:00.591) 0:00:41.088 ****** 2026-02-17 03:15:21.212162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 03:15:21.212168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:21.212183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:21.212189 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:21.212205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 03:15:22.300724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:22.300815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:22.300826 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:22.300835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 03:15:22.300857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:22.300864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:22.300894 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:22.300901 | orchestrator | 2026-02-17 03:15:22.300909 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-17 03:15:22.300918 | orchestrator | Tuesday 17 February 2026 03:15:21 +0000 (0:00:00.797) 0:00:41.886 ****** 2026-02-17 03:15:22.300939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-17 03:15:22.300962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:22.300970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:22.300977 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:22.300984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-17 03:15:22.300991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:22.301004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:22.301011 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:22.301022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-17 03:15:22.301033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:23.732188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:23.732299 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:23.732316 | orchestrator | 2026-02-17 03:15:23.732330 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-17 03:15:23.732342 | orchestrator | Tuesday 17 February 2026 03:15:22 +0000 (0:00:01.083) 0:00:42.969 ****** 2026-02-17 03:15:23.732356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 03:15:23.732370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:23.732405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:23.732419 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:23.732431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 03:15:23.732463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:23.732517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:23.732543 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:23.732563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 03:15:23.732581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:23.732613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:23.732633 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:23.732653 | orchestrator | 2026-02-17 03:15:23.732670 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-17 03:15:23.732689 | orchestrator | Tuesday 17 February 2026 03:15:22 +0000 (0:00:00.581) 0:00:43.551 ****** 2026-02-17 03:15:23.732708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 03:15:23.732728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:23.732773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:30.172900 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:30.173002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 03:15:30.173016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:30.173042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:30.173050 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:30.173058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 03:15:30.173066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 03:15:30.173085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 03:15:30.173093 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:30.173100 | orchestrator | 2026-02-17 03:15:30.173108 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-17 03:15:30.173116 | orchestrator | Tuesday 17 February 2026 03:15:23 +0000 (0:00:00.855) 0:00:44.406 ****** 2026-02-17 03:15:30.173123 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-17 03:15:30.173145 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-17 03:15:30.173152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-17 03:15:30.173159 | orchestrator | 2026-02-17 03:15:30.173166 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-17 03:15:30.173173 | orchestrator | Tuesday 17 February 2026 03:15:25 +0000 (0:00:01.685) 0:00:46.091 ****** 2026-02-17 03:15:30.173181 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-17 03:15:30.173188 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-17 03:15:30.173194 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-17 03:15:30.173201 | orchestrator | 2026-02-17 03:15:30.173214 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-17 03:15:30.173221 | orchestrator | Tuesday 17 February 2026 03:15:27 +0000 (0:00:01.658) 0:00:47.750 ****** 2026-02-17 03:15:30.173228 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 03:15:30.173234 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 03:15:30.173241 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 03:15:30.173248 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 03:15:30.173254 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:30.173261 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 03:15:30.173268 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:30.173274 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 03:15:30.173281 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:30.173288 | orchestrator | 2026-02-17 03:15:30.173294 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-17 03:15:30.173301 | orchestrator | Tuesday 17 February 2026 03:15:27 +0000 (0:00:00.793) 0:00:48.544 ****** 2026-02-17 03:15:30.173308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:30.173316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:30.173328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 03:15:30.173342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:34.657611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:34.657696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 03:15:34.657707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:34.657715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:34.657721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 03:15:34.657728 | orchestrator | 2026-02-17 03:15:34.657735 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-17 03:15:34.657757 | orchestrator | Tuesday 17 February 2026 03:15:30 +0000 (0:00:02.307) 0:00:50.851 ****** 2026-02-17 03:15:34.657765 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:15:34.657771 | orchestrator | 2026-02-17 03:15:34.657778 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-17 03:15:34.657784 | orchestrator | Tuesday 17 February 2026 03:15:31 +0000 (0:00:00.885) 0:00:51.736 ****** 2026-02-17 03:15:34.657805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 03:15:34.657832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 03:15:34.657840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:34.657914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 03:15:34.657926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 03:15:34.657944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 03:15:34.657960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:34.657988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 03:15:35.422426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 03:15:35.422536 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 03:15:35.422552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:35.422581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 03:15:35.422595 | orchestrator | 2026-02-17 03:15:35.422607 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-17 03:15:35.422623 | orchestrator | Tuesday 17 February 2026 03:15:34 +0000 (0:00:03.595) 0:00:55.332 ****** 2026-02-17 03:15:35.422673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-17 03:15:35.422719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 03:15:35.422740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:35.422760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 03:15:35.422777 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:35.422797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-17 03:15:35.422824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 03:15:35.422930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:35.422954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 03:15:35.422973 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:35.423014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-17 03:15:44.334754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 03:15:44.334944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-17 03:15:44.334964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.335002 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:44.335017 | orchestrator | 2026-02-17 03:15:44.335029 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-17 03:15:44.335042 | orchestrator | Tuesday 17 February 2026 03:15:35 +0000 (0:00:00.766) 0:00:56.098 ****** 2026-02-17 03:15:44.335054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-17 03:15:44.335068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-17 03:15:44.335081 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:44.335110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-17 03:15:44.335122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-17 03:15:44.335133 | 
orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:44.335144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-17 03:15:44.335155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-17 03:15:44.335166 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:44.335177 | orchestrator | 2026-02-17 03:15:44.335188 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-17 03:15:44.335199 | orchestrator | Tuesday 17 February 2026 03:15:36 +0000 (0:00:01.291) 0:00:57.390 ****** 2026-02-17 03:15:44.335210 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:15:44.335221 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:15:44.335232 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:15:44.335242 | orchestrator | 2026-02-17 03:15:44.335254 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-17 03:15:44.335267 | orchestrator | Tuesday 17 February 2026 03:15:38 +0000 (0:00:01.336) 0:00:58.727 ****** 2026-02-17 03:15:44.335279 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:15:44.335291 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:15:44.335304 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:15:44.335317 | orchestrator | 2026-02-17 03:15:44.335329 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-17 03:15:44.335340 | orchestrator | Tuesday 17 February 2026 03:15:40 +0000 (0:00:02.136) 0:01:00.863 ****** 2026-02-17 03:15:44.335354 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:15:44.335366 | 
orchestrator | 2026-02-17 03:15:44.335401 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-17 03:15:44.335414 | orchestrator | Tuesday 17 February 2026 03:15:40 +0000 (0:00:00.710) 0:01:01.573 ****** 2026-02-17 03:15:44.335430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 03:15:44.335463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.335478 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.335493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 03:15:44.335506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.335528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 03:15:44.962265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.962379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.962394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.962404 | orchestrator | 2026-02-17 03:15:44.962415 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-17 03:15:44.962426 | orchestrator | Tuesday 17 February 2026 03:15:44 +0000 (0:00:03.429) 0:01:05.003 ****** 2026-02-17 03:15:44.962436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 03:15:44.962446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.962493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.962504 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:44.962521 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 03:15:44.962531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.962540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.962549 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:44.962558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 03:15:44.962574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 03:15:44.962590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:15:54.948958 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:54.949089 | orchestrator | 2026-02-17 03:15:54.949114 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-17 03:15:54.949134 | orchestrator | Tuesday 17 February 2026 03:15:44 +0000 (0:00:00.630) 0:01:05.633 ****** 2026-02-17 03:15:54.949173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-17 03:15:54.949192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-17 03:15:54.949208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-17 03:15:54.949225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2026-02-17 03:15:54.949242 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:54.949259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-17 03:15:54.949274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-17 03:15:54.949291 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:54.949307 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:54.949322 | orchestrator | 2026-02-17 03:15:54.949337 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-17 03:15:54.949352 | orchestrator | Tuesday 17 February 2026 03:15:45 +0000 (0:00:00.832) 0:01:06.466 ****** 2026-02-17 03:15:54.949366 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:15:54.949381 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:15:54.949396 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:15:54.949411 | orchestrator | 2026-02-17 03:15:54.949427 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-17 03:15:54.949442 | orchestrator | Tuesday 17 February 2026 03:15:47 +0000 (0:00:01.521) 0:01:07.987 ****** 2026-02-17 03:15:54.949491 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:15:54.949507 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:15:54.949522 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:15:54.949537 | orchestrator | 2026-02-17 03:15:54.949551 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-17 03:15:54.949568 | orchestrator 
| Tuesday 17 February 2026 03:15:49 +0000 (0:00:02.230) 0:01:10.218 ****** 2026-02-17 03:15:54.949584 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:54.949601 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:54.949617 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:54.949632 | orchestrator | 2026-02-17 03:15:54.949649 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-17 03:15:54.949665 | orchestrator | Tuesday 17 February 2026 03:15:49 +0000 (0:00:00.318) 0:01:10.537 ****** 2026-02-17 03:15:54.949682 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:15:54.949699 | orchestrator | 2026-02-17 03:15:54.949716 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-17 03:15:54.949732 | orchestrator | Tuesday 17 February 2026 03:15:50 +0000 (0:00:00.765) 0:01:11.302 ****** 2026-02-17 03:15:54.949755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-17 03:15:54.949815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-17 03:15:54.949833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-17 03:15:54.949881 | orchestrator | 2026-02-17 03:15:54.949901 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-17 03:15:54.949919 | orchestrator | Tuesday 17 February 2026 03:15:53 +0000 (0:00:02.834) 0:01:14.137 ****** 2026-02-17 03:15:54.949951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-17 03:15:54.949962 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:15:54.949972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-17 03:15:54.949982 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:15:54.949992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-17 03:15:54.950010 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:15:54.950103 | orchestrator | 2026-02-17 03:15:54.950138 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-17 03:16:02.938215 | orchestrator | Tuesday 17 February 2026 03:15:54 +0000 (0:00:01.481) 0:01:15.618 ****** 2026-02-17 03:16:02.938319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 03:16:02.938334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 03:16:02.938342 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:02.938362 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 03:16:02.938367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 03:16:02.938371 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:02.938375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 03:16:02.938379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 03:16:02.938384 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:02.938388 | orchestrator | 2026-02-17 03:16:02.938392 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-17 03:16:02.938397 | orchestrator | Tuesday 17 February 2026 03:15:56 +0000 (0:00:01.835) 0:01:17.454 ****** 2026-02-17 03:16:02.938401 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:02.938405 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:02.938409 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:02.938413 | orchestrator | 2026-02-17 03:16:02.938419 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-17 03:16:02.938423 | orchestrator | Tuesday 17 February 2026 03:15:57 +0000 (0:00:00.464) 0:01:17.918 ****** 2026-02-17 03:16:02.938427 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:02.938431 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:02.938435 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:02.938439 | orchestrator | 2026-02-17 03:16:02.938443 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-17 03:16:02.938447 | orchestrator | Tuesday 17 February 2026 03:15:58 +0000 (0:00:01.390) 0:01:19.309 ****** 2026-02-17 03:16:02.938451 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:16:02.938455 | orchestrator | 2026-02-17 03:16:02.938460 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-17 03:16:02.938463 | orchestrator | Tuesday 17 February 2026 03:15:59 +0000 (0:00:00.956) 0:01:20.266 ****** 2026-02-17 03:16:02.938484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 03:16:02.938496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:16:02.938502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 
03:16:02.938507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 03:16:02.938511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 03:16:02.938516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:16:02.938528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 03:16:03.643347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 03:16:03.643428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 03:16:03.643440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:16:03.643447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-17 03:16:03.643454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-17 03:16:03.643479 | orchestrator |
2026-02-17 03:16:03.643498 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-17 03:16:03.643507 | orchestrator | Tuesday 17 February 2026 03:16:03 +0000 (0:00:03.435) 0:01:23.702 ******
2026-02-17 03:16:03.643528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-17 03:16:03.643536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:16:03.643543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-17 03:16:03.643549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-17 03:16:03.643556 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:16:03.643564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-17 03:16:03.643585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.664998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.665087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.665095 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:16:13.665103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-17 03:16:13.665109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.665144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.665163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.665168 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:16:13.665173 | orchestrator |
2026-02-17 03:16:13.665178 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-17 03:16:13.665183 | orchestrator | Tuesday 17 February 2026 03:16:03 +0000 (0:00:00.717) 0:01:24.419 ******
2026-02-17 03:16:13.665189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-17 03:16:13.665196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-17 03:16:13.665202 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:16:13.665207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-17 03:16:13.665211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-17 03:16:13.665215 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:16:13.665220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-17 03:16:13.665224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-17 03:16:13.665229 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:16:13.665233 | orchestrator |
2026-02-17 03:16:13.665238 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-17 03:16:13.665242 | orchestrator | Tuesday 17 February 2026 03:16:04 +0000 (0:00:01.214) 0:01:25.633 ******
2026-02-17 03:16:13.665247 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:16:13.665255 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:16:13.665260 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:16:13.665264 | orchestrator |
2026-02-17 03:16:13.665269 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-17 03:16:13.665273 | orchestrator | Tuesday 17 February 2026 03:16:06 +0000 (0:00:01.329) 0:01:26.963 ******
2026-02-17 03:16:13.665277 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:16:13.665282 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:16:13.665287 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:16:13.665291 | orchestrator |
2026-02-17 03:16:13.665295 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-17 03:16:13.665300 | orchestrator | Tuesday 17 February 2026 03:16:08 +0000 (0:00:02.357) 0:01:29.320 ******
2026-02-17 03:16:13.665304 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:16:13.665309 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:16:13.665313 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:16:13.665317 | orchestrator |
2026-02-17 03:16:13.665321 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-17 03:16:13.665326 | orchestrator | Tuesday 17 February 2026 03:16:08 +0000 (0:00:00.319) 0:01:29.646 ******
2026-02-17 03:16:13.665330 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:16:13.665335 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:16:13.665339 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:16:13.665343 | orchestrator |
2026-02-17 03:16:13.665348 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-17 03:16:13.665352 | orchestrator | Tuesday 17 February 2026 03:16:09 +0000 (0:00:00.319) 0:01:29.965 ******
2026-02-17 03:16:13.665356 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:16:13.665361 | orchestrator |
2026-02-17 03:16:13.665365 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-17 03:16:13.665369 | orchestrator | Tuesday 17 February 2026 03:16:10 +0000 (0:00:01.020) 0:01:30.986 ******
2026-02-17 03:16:13.665383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 03:16:13.910188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 03:16:13.910323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 03:16:13.910381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 03:16:13.910424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 03:16:13.910608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 03:16:14.540298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 03:16:14.540310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540384 | orchestrator |
2026-02-17 03:16:14.540392 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-17 03:16:14.540400 | orchestrator | Tuesday 17 February 2026 03:16:13 +0000 (0:00:03.601) 0:01:34.588 ******
2026-02-17 03:16:14.540407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 03:16:14.540415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 03:16:14.540422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 03:16:14.540449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 03:16:15.042336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 03:16:15.042411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-17 03:16:15.042420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 03:16:15.042427 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:16:15.042773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 03:16:15.042782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 03:16:15.042788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 03:16:15.042822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 03:16:15.042832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-17 03:16:15.042837 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:16:15.042842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 03:16:15.042848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 03:16:15.042853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 03:16:15.042923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 03:16:15.042942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 03:16:25.164100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:16:25.164178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-17 03:16:25.164185 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:25.164192 | orchestrator | 2026-02-17 03:16:25.164197 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-17 03:16:25.164201 | orchestrator | Tuesday 17 February 2026 03:16:15 +0000 (0:00:01.128) 0:01:35.717 ****** 2026-02-17 03:16:25.164207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-17 03:16:25.164213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-17 03:16:25.164218 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:25.164222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-17 03:16:25.164226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-17 03:16:25.164230 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:25.164234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-17 03:16:25.164253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-17 03:16:25.164257 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:25.164261 | orchestrator | 2026-02-17 03:16:25.164265 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-17 03:16:25.164269 | orchestrator | Tuesday 17 February 2026 03:16:16 +0000 (0:00:01.310) 0:01:37.027 ****** 2026-02-17 03:16:25.164273 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:16:25.164277 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:16:25.164281 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:16:25.164285 | orchestrator | 2026-02-17 03:16:25.164289 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-17 03:16:25.164293 | orchestrator | Tuesday 17 February 2026 03:16:17 +0000 (0:00:01.314) 0:01:38.341 ****** 2026-02-17 03:16:25.164296 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:16:25.164300 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:16:25.164304 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:16:25.164308 | 
orchestrator | 2026-02-17 03:16:25.164311 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-17 03:16:25.164315 | orchestrator | Tuesday 17 February 2026 03:16:19 +0000 (0:00:02.017) 0:01:40.359 ****** 2026-02-17 03:16:25.164319 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:25.164323 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:25.164327 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:25.164330 | orchestrator | 2026-02-17 03:16:25.164334 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-17 03:16:25.164338 | orchestrator | Tuesday 17 February 2026 03:16:20 +0000 (0:00:00.334) 0:01:40.694 ****** 2026-02-17 03:16:25.164342 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:16:25.164346 | orchestrator | 2026-02-17 03:16:25.164349 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-17 03:16:25.164353 | orchestrator | Tuesday 17 February 2026 03:16:21 +0000 (0:00:01.164) 0:01:41.858 ****** 2026-02-17 03:16:25.164373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 03:16:25.164379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 03:16:25.164394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 03:16:28.608325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 03:16:28.608526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 03:16:28.608591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 03:16:28.608664 | orchestrator | 2026-02-17 03:16:28.608681 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-17 03:16:28.608694 | orchestrator | Tuesday 17 February 2026 03:16:25 +0000 (0:00:04.158) 0:01:46.017 ****** 2026-02-17 03:16:28.608714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 03:16:28.608738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 03:16:32.582191 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:32.582278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 
03:16:32.582304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 03:16:32.582329 | orchestrator | skipping: [testbed-node-1] 
2026-02-17 03:16:32.582350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 03:16:32.582361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 03:16:32.582374 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:32.582380 | orchestrator | 2026-02-17 03:16:32.582387 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-17 03:16:32.582394 | orchestrator | 
Tuesday 17 February 2026 03:16:28 +0000 (0:00:03.374) 0:01:49.391 ****** 2026-02-17 03:16:32.582402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 03:16:32.582416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 03:16:41.067674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 03:16:41.067784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 03:16:41.067799 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:41.067813 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:41.067823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 03:16:41.067851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 03:16:41.067862 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:41.067916 | orchestrator | 2026-02-17 03:16:41.067929 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-17 03:16:41.067940 | orchestrator | Tuesday 17 February 2026 03:16:32 +0000 (0:00:03.856) 0:01:53.247 ****** 2026-02-17 03:16:41.067971 | orchestrator | changed: [testbed-node-1] 2026-02-17 
03:16:41.067982 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:16:41.067992 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:16:41.068002 | orchestrator | 2026-02-17 03:16:41.068012 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-17 03:16:41.068022 | orchestrator | Tuesday 17 February 2026 03:16:33 +0000 (0:00:01.383) 0:01:54.631 ****** 2026-02-17 03:16:41.068032 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:16:41.068042 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:16:41.068052 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:16:41.068061 | orchestrator | 2026-02-17 03:16:41.068071 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-17 03:16:41.068081 | orchestrator | Tuesday 17 February 2026 03:16:36 +0000 (0:00:02.079) 0:01:56.710 ****** 2026-02-17 03:16:41.068091 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:41.068101 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:41.068110 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:41.068120 | orchestrator | 2026-02-17 03:16:41.068130 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-17 03:16:41.068140 | orchestrator | Tuesday 17 February 2026 03:16:36 +0000 (0:00:00.312) 0:01:57.023 ****** 2026-02-17 03:16:41.068150 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:16:41.068159 | orchestrator | 2026-02-17 03:16:41.068169 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-17 03:16:41.068179 | orchestrator | Tuesday 17 February 2026 03:16:37 +0000 (0:00:01.092) 0:01:58.116 ****** 2026-02-17 03:16:41.068206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-17 03:16:41.068220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-17 03:16:41.068233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-17 03:16:41.068245 | 
orchestrator | 2026-02-17 03:16:41.068256 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-17 03:16:41.068275 | orchestrator | Tuesday 17 February 2026 03:16:40 +0000 (0:00:03.002) 0:02:01.118 ****** 2026-02-17 03:16:41.068288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-17 03:16:41.068300 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:41.068312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-17 03:16:41.068324 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:41.068335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-17 03:16:41.068417 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:41.068436 | orchestrator | 2026-02-17 03:16:41.068447 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-17 03:16:41.068459 | orchestrator | Tuesday 17 February 2026 03:16:40 +0000 (0:00:00.408) 0:02:01.527 ****** 2026-02-17 03:16:41.068479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-17 03:16:50.017224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-17 03:16:50.017347 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:50.017372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-17 03:16:50.017390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-17 03:16:50.017405 | orchestrator | skipping: 
[testbed-node-1] 2026-02-17 03:16:50.017420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-17 03:16:50.017436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-17 03:16:50.017478 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:50.017489 | orchestrator | 2026-02-17 03:16:50.017499 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-17 03:16:50.017510 | orchestrator | Tuesday 17 February 2026 03:16:41 +0000 (0:00:00.945) 0:02:02.472 ****** 2026-02-17 03:16:50.017518 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:16:50.017528 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:16:50.017536 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:16:50.017545 | orchestrator | 2026-02-17 03:16:50.017554 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-17 03:16:50.017563 | orchestrator | Tuesday 17 February 2026 03:16:43 +0000 (0:00:01.310) 0:02:03.782 ****** 2026-02-17 03:16:50.017572 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:16:50.017580 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:16:50.017590 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:16:50.017598 | orchestrator | 2026-02-17 03:16:50.017607 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-17 03:16:50.017630 | orchestrator | Tuesday 17 February 2026 03:16:45 +0000 (0:00:02.197) 0:02:05.979 ****** 2026-02-17 03:16:50.017639 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:50.017648 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
03:16:50.017657 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:50.017666 | orchestrator | 2026-02-17 03:16:50.017674 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-17 03:16:50.017683 | orchestrator | Tuesday 17 February 2026 03:16:45 +0000 (0:00:00.336) 0:02:06.315 ****** 2026-02-17 03:16:50.017692 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:16:50.017701 | orchestrator | 2026-02-17 03:16:50.017715 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-17 03:16:50.017729 | orchestrator | Tuesday 17 February 2026 03:16:46 +0000 (0:00:01.130) 0:02:07.446 ****** 2026-02-17 03:16:50.017778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 03:16:50.017823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 03:16:50.017854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 03:16:51.681973 | orchestrator | 2026-02-17 03:16:51.682106 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-17 03:16:51.682120 | orchestrator | Tuesday 17 February 2026 03:16:50 +0000 (0:00:03.246) 0:02:10.693 ****** 2026-02-17 03:16:51.682149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 03:16:51.682161 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:16:51.682189 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 03:16:51.682223 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:16:51.682239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 03:16:51.682248 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:16:51.682257 | orchestrator | 2026-02-17 03:16:51.682265 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-17 03:16:51.682274 | orchestrator | Tuesday 17 February 2026 03:16:50 +0000 (0:00:00.667) 0:02:11.360 ****** 2026-02-17 03:16:51.682283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-17 03:16:51.682299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 03:16:51.682309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-17 03:16:51.682324 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 03:17:00.649024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-17 03:17:00.649137 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:00.649155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-17 03:17:00.649171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 03:17:00.649212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-17 03:17:00.649230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 03:17:00.649249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-17 03:17:00.649265 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:00.649284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-17 03:17:00.649302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 03:17:00.649320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-17 03:17:00.649355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 03:17:00.649366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-02-17 03:17:00.649404 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:17:00.649435 | orchestrator |
2026-02-17 03:17:00.649452 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-02-17 03:17:00.649469 | orchestrator | Tuesday 17 February 2026 03:16:51 +0000 (0:00:00.995) 0:02:12.356 ******
2026-02-17 03:17:00.649486 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:17:00.649503 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:17:00.649519 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:17:00.649534 | orchestrator |
2026-02-17 03:17:00.649552 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-02-17 03:17:00.649568 | orchestrator | Tuesday 17 February 2026 03:16:53 +0000 (0:00:01.633) 0:02:13.989 ******
2026-02-17 03:17:00.649586 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:17:00.649602 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:17:00.649618 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:17:00.649633 | orchestrator |
2026-02-17 03:17:00.649649 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-02-17 03:17:00.649665 | orchestrator | Tuesday 17 February 2026 03:16:55 +0000 (0:00:02.154) 0:02:16.144 ******
2026-02-17 03:17:00.649682 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:17:00.649698 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:17:00.649739 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:17:00.649757 | orchestrator |
2026-02-17 03:17:00.649773 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-02-17 03:17:00.649788 | orchestrator | Tuesday 17 February 2026 03:16:55 +0000 (0:00:00.328) 0:02:16.473 ******
2026-02-17 03:17:00.649800 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:17:00.649812 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:17:00.649824 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:00.649834 | orchestrator | 2026-02-17 03:17:00.649846 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-17 03:17:00.649857 | orchestrator | Tuesday 17 February 2026 03:16:56 +0000 (0:00:00.319) 0:02:16.792 ****** 2026-02-17 03:17:00.649869 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:17:00.649905 | orchestrator | 2026-02-17 03:17:00.649916 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-17 03:17:00.649925 | orchestrator | Tuesday 17 February 2026 03:16:57 +0000 (0:00:01.236) 0:02:18.029 ****** 2026-02-17 03:17:00.649948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:17:00.649976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:17:00.649988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:17:00.650000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:17:00.650069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:17:01.280043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:17:01.280151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:17:01.280193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:17:01.280203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:17:01.280212 | 
orchestrator | 2026-02-17 03:17:01.280225 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-17 03:17:01.280238 | orchestrator | Tuesday 17 February 2026 03:17:00 +0000 (0:00:03.295) 0:02:21.324 ****** 2026-02-17 03:17:01.280271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:17:01.280293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-17 03:17:01.280304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:17:01.280324 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:01.280337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:17:01.280350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:17:01.280361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:17:01.280372 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:01.280395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:17:11.366066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:17:11.366155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:17:11.366163 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:11.366169 | orchestrator | 2026-02-17 03:17:11.366174 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-17 03:17:11.366180 | orchestrator | Tuesday 17 February 2026 03:17:01 +0000 (0:00:00.627) 0:02:21.952 ****** 2026-02-17 03:17:11.366185 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-17 03:17:11.366191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-17 03:17:11.366196 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:17:11.366201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-17 03:17:11.366205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-17 03:17:11.366209 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:17:11.366213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-17 03:17:11.366217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-17 03:17:11.366221 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:17:11.366225 
| orchestrator |
2026-02-17 03:17:11.366229 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-02-17 03:17:11.366232 | orchestrator | Tuesday 17 February 2026 03:17:02 +0000 (0:00:01.158) 0:02:23.111 ******
2026-02-17 03:17:11.366236 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:17:11.366240 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:17:11.366258 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:17:11.366262 | orchestrator |
2026-02-17 03:17:11.366266 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-02-17 03:17:11.366270 | orchestrator | Tuesday 17 February 2026 03:17:03 +0000 (0:00:01.515) 0:02:24.626 ******
2026-02-17 03:17:11.366274 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:17:11.366278 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:17:11.366282 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:17:11.366285 | orchestrator |
2026-02-17 03:17:11.366289 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-02-17 03:17:11.366293 | orchestrator | Tuesday 17 February 2026 03:17:06 +0000 (0:00:02.229) 0:02:26.855 ******
2026-02-17 03:17:11.366297 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:17:11.366310 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:17:11.366314 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:17:11.366318 | orchestrator |
2026-02-17 03:17:11.366322 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-02-17 03:17:11.366337 | orchestrator | Tuesday 17 February 2026 03:17:06 +0000 (0:00:00.333) 0:02:27.189 ******
2026-02-17 03:17:11.366342 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:17:11.366346 | orchestrator |
2026-02-17 03:17:11.366350 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-02-17 03:17:11.366356 | orchestrator | Tuesday 17 February 2026 03:17:07 +0000 (0:00:01.408) 0:02:28.597 ****** 2026-02-17 03:17:11.366363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-17 03:17:11.366375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 03:17:11.366386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-17 03:17:11.366397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 03:17:11.366411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-17 03:17:16.747784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 03:17:16.747865 | orchestrator | 2026-02-17 03:17:16.747875 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-17 03:17:16.747921 | orchestrator | Tuesday 17 February 2026 03:17:11 +0000 (0:00:03.440) 0:02:32.037 ****** 2026-02-17 03:17:16.747928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-17 03:17:16.747963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 03:17:16.747983 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:16.747991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-17 03:17:16.748008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 03:17:16.748012 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:16.748016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-17 03:17:16.748020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 03:17:16.748028 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:16.748032 | orchestrator | 2026-02-17 03:17:16.748036 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-17 03:17:16.748040 | orchestrator | Tuesday 17 February 2026 03:17:12 +0000 (0:00:00.723) 0:02:32.761 ****** 2026-02-17 03:17:16.748045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-17 03:17:16.748051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-17 03:17:16.748057 | orchestrator | skipping: 
[testbed-node-0] 2026-02-17 03:17:16.748061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-17 03:17:16.748065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-17 03:17:16.748069 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:16.748072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-17 03:17:16.748076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-17 03:17:16.748080 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:16.748084 | orchestrator | 2026-02-17 03:17:16.748090 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-17 03:17:16.748094 | orchestrator | Tuesday 17 February 2026 03:17:12 +0000 (0:00:00.919) 0:02:33.680 ****** 2026-02-17 03:17:16.748098 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:17:16.748102 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:17:16.748106 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:17:16.748110 | orchestrator | 2026-02-17 03:17:16.748114 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-17 03:17:16.748117 | orchestrator | Tuesday 17 February 2026 03:17:14 +0000 (0:00:01.631) 0:02:35.312 ****** 2026-02-17 03:17:16.748121 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:17:16.748125 | orchestrator | changed: 
[testbed-node-1] 2026-02-17 03:17:16.748129 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:17:16.748133 | orchestrator | 2026-02-17 03:17:16.748137 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-17 03:17:16.748143 | orchestrator | Tuesday 17 February 2026 03:17:16 +0000 (0:00:02.104) 0:02:37.416 ****** 2026-02-17 03:17:21.566604 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:17:21.566681 | orchestrator | 2026-02-17 03:17:21.566687 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-17 03:17:21.566692 | orchestrator | Tuesday 17 February 2026 03:17:17 +0000 (0:00:01.128) 0:02:38.545 ****** 2026-02-17 03:17:21.566698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 03:17:21.566721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:17:21.566728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 03:17:21.566733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 03:17:21.566748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 03:17:21.566764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:17:21.566769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 03:17:21.566779 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 03:17:21.566783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 03:17:21.566787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:17:21.566794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 03:17:21.566803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417715 | orchestrator | 2026-02-17 03:17:22.417769 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-17 03:17:22.417777 | orchestrator | Tuesday 17 February 2026 03:17:21 +0000 (0:00:03.780) 0:02:42.326 ****** 2026-02-17 03:17:22.417794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 03:17:22.417801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417817 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:22.417830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 03:17:22.417845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417864 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:22.417869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 03:17:22.417876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 03:17:22.417915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 03:17:33.919671 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:33.919811 | orchestrator | 2026-02-17 03:17:33.919841 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-17 03:17:33.919858 | orchestrator | Tuesday 17 February 2026 03:17:22 +0000 (0:00:00.847) 0:02:43.173 ****** 2026-02-17 03:17:33.919870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-17 03:17:33.919977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-17 03:17:33.919996 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:33.920009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-17 03:17:33.920021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-17 03:17:33.920033 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:33.920044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-17 03:17:33.920056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-17 03:17:33.920067 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:33.920078 | orchestrator | 2026-02-17 03:17:33.920090 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-17 03:17:33.920101 | orchestrator | Tuesday 17 February 2026 03:17:23 +0000 (0:00:00.893) 0:02:44.067 ****** 2026-02-17 03:17:33.920112 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:17:33.920123 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:17:33.920134 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:17:33.920145 | orchestrator | 2026-02-17 03:17:33.920156 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-17 03:17:33.920167 | orchestrator | Tuesday 17 February 2026 03:17:24 +0000 (0:00:01.337) 0:02:45.405 ****** 2026-02-17 03:17:33.920178 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:17:33.920189 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:17:33.920200 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:17:33.920212 | orchestrator | 2026-02-17 03:17:33.920223 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-17 03:17:33.920234 | orchestrator | Tuesday 17 February 2026 03:17:26 +0000 (0:00:02.161) 0:02:47.566 ****** 2026-02-17 03:17:33.920245 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:17:33.920256 | orchestrator | 2026-02-17 03:17:33.920267 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-17 03:17:33.920278 | orchestrator | Tuesday 17 February 2026 03:17:28 +0000 (0:00:01.424) 0:02:48.990 ****** 2026-02-17 03:17:33.920289 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 03:17:33.920301 | orchestrator | 2026-02-17 03:17:33.920336 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-17 03:17:33.920347 | orchestrator | Tuesday 17 February 2026 03:17:31 +0000 (0:00:03.151) 0:02:52.141 ****** 2026-02-17 03:17:33.920404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:17:33.920422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 03:17:33.920434 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:33.920454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:17:33.920476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 03:17:33.920488 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:33.920510 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:17:36.571757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 03:17:36.571961 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:36.571987 | orchestrator | 2026-02-17 03:17:36.571999 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-17 03:17:36.572012 | orchestrator | Tuesday 17 February 2026 03:17:33 +0000 (0:00:02.447) 0:02:54.588 ****** 2026-02-17 03:17:36.572072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:17:36.572088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 03:17:36.572100 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:36.572135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:17:36.572167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-02-17 03:17:36.572180 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:36.572192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:17:36.572218 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 03:17:46.828547 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:46.828663 | orchestrator | 2026-02-17 03:17:46.828680 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-17 03:17:46.828694 | orchestrator | Tuesday 17 February 2026 03:17:36 +0000 (0:00:02.649) 0:02:57.238 ****** 2026-02-17 03:17:46.828708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 03:17:46.828764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 03:17:46.828777 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:46.828789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 03:17:46.828801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 03:17:46.828812 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:46.828824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 03:17:46.828835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 03:17:46.828846 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:46.828858 | orchestrator | 2026-02-17 03:17:46.828869 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-17 03:17:46.828880 | orchestrator | Tuesday 17 February 2026 03:17:39 +0000 (0:00:02.872) 0:03:00.111 ****** 2026-02-17 03:17:46.828937 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:17:46.828980 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:17:46.828992 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:17:46.829004 | orchestrator | 2026-02-17 03:17:46.829018 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-17 03:17:46.829036 | orchestrator | Tuesday 17 February 2026 03:17:41 +0000 (0:00:02.306) 0:03:02.417 ****** 2026-02-17 03:17:46.829055 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:46.829072 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:46.829090 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:46.829108 | orchestrator | 2026-02-17 03:17:46.829126 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-17 03:17:46.829145 | orchestrator | Tuesday 17 February 2026 03:17:43 +0000 (0:00:01.549) 0:03:03.967 ****** 2026-02-17 03:17:46.829164 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:46.829182 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:46.829199 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:46.829216 | orchestrator | 2026-02-17 03:17:46.829234 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-17 03:17:46.829252 | orchestrator | Tuesday 17 February 2026 03:17:43 +0000 (0:00:00.326) 0:03:04.294 ****** 2026-02-17 03:17:46.829269 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:17:46.829288 | orchestrator | 2026-02-17 03:17:46.829304 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-17 03:17:46.829321 | orchestrator | Tuesday 17 February 2026 03:17:44 +0000 (0:00:01.380) 0:03:05.675 ****** 2026-02-17 03:17:46.829370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 03:17:46.829411 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 03:17:46.829433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 03:17:46.829453 | orchestrator | 2026-02-17 03:17:46.829469 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-17 03:17:46.829501 | orchestrator | Tuesday 17 February 2026 03:17:46 +0000 (0:00:01.604) 0:03:07.280 ****** 2026-02-17 03:17:46.829538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 03:17:55.480601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 03:17:55.480698 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:55.480712 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:55.480721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 03:17:55.480729 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:55.480738 | orchestrator | 2026-02-17 03:17:55.480747 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-17 03:17:55.480757 | orchestrator | Tuesday 17 February 2026 03:17:47 +0000 (0:00:00.442) 0:03:07.722 ****** 2026-02-17 03:17:55.480767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-17 03:17:55.480776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-17 03:17:55.480782 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:55.480787 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:55.480792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-17 03:17:55.480814 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:55.480819 | orchestrator | 2026-02-17 03:17:55.480855 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-17 03:17:55.480861 | orchestrator | Tuesday 17 February 2026 03:17:47 +0000 (0:00:00.881) 0:03:08.604 ****** 2026-02-17 03:17:55.480866 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:55.480871 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:55.480875 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:55.480882 | orchestrator | 2026-02-17 03:17:55.480942 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-17 03:17:55.480951 | orchestrator | Tuesday 17 February 2026 03:17:48 +0000 (0:00:00.466) 0:03:09.070 ****** 2026-02-17 03:17:55.480958 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:55.480963 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:55.480968 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:55.480973 | orchestrator | 2026-02-17 03:17:55.480978 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-17 03:17:55.480982 | orchestrator | Tuesday 17 February 2026 03:17:49 +0000 (0:00:01.325) 0:03:10.396 ****** 2026-02-17 03:17:55.480987 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:55.480992 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:55.480997 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:17:55.481002 | orchestrator | 2026-02-17 03:17:55.481007 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-17 03:17:55.481012 | orchestrator | Tuesday 17 February 2026 03:17:50 +0000 (0:00:00.358) 0:03:10.755 ****** 2026-02-17 03:17:55.481017 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:17:55.481022 | orchestrator | 2026-02-17 03:17:55.481027 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-02-17 03:17:55.481031 | orchestrator | Tuesday 17 February 2026 03:17:51 +0000 (0:00:01.565) 0:03:12.321 ****** 2026-02-17 03:17:55.481051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 03:17:55.481063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:55.481070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:55.481083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:55.481088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-17 03:17:55.481101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:55.679094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:55.679204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:55.679216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:55.679246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:17:55.679256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:55.679264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-17 03:17:55.679287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:55.679296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.679309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-17 03:17:55.679325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-17 03:17:55.679334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-17 03:17:55.679343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.679361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-17 03:17:55.787853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.788055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.788078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.788095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-17 03:17:55.788108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.788159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.788185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.788195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-17 03:17:55.788204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:55.788213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.788221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:55.788233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:55.788256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.899956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:55.900054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-17 03:17:55.900067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-17
03:17:55.900079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.900089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-17 03:17:55.900142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-17 03:17:55.900171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:55.900182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.900193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:55.900202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-17 03:17:55.900213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-17 03:17:55.900236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:55.900253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-17 03:17:57.186165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.186258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-17 03:17:57.186278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-17 03:17:57.186292 | orchestrator |
2026-02-17 03:17:57.186303 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-02-17 03:17:57.186348 | orchestrator | Tuesday 17 February 2026 03:17:56 +0000 (0:00:04.369) 0:03:16.691 ******
2026-02-17 03:17:57.186368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-17 03:17:57.186395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.186407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.186418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.186428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-17 03:17:57.186451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.186464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:57.186474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:57.186492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.283031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-17 03:17:57.283113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.283142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-17 03:17:57.283162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.283171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-17 03:17:57.283193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.283203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-17 03:17:57.283210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.283222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-17 03:17:57.283232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-17 03:17:57.283240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-17 03:17:57.283254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.381839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 03:17:57.382000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:17:57.382089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:57.382105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.382119 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:17:57.382133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.382167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:57.382180 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.382201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.382213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-17 03:17:57.382225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:17:57.382237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.382287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.581403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:57.581505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-17 03:17:57.581523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:57.581567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:57.581590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.581610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.581652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:17:57.581715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2026-02-17 03:17:57.581740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 03:17:57.581752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:17:57.581765 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:17:57.581778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-17 03:17:57.581790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-17 03:17:57.581810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-17 03:18:08.632288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-17 03:18:08.632414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:18:08.632427 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:18:08.632436 | orchestrator | 2026-02-17 03:18:08.632444 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-17 03:18:08.632452 | orchestrator | Tuesday 17 February 2026 03:17:57 +0000 (0:00:01.562) 0:03:18.253 ****** 2026-02-17 03:18:08.632460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-17 03:18:08.632469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-17 03:18:08.632477 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:18:08.632484 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-17 03:18:08.632491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-17 03:18:08.632498 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:18:08.632505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-17 03:18:08.632512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-17 03:18:08.632539 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:18:08.632547 | orchestrator | 2026-02-17 03:18:08.632554 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-17 03:18:08.632560 | orchestrator | Tuesday 17 February 2026 03:17:59 +0000 (0:00:02.069) 0:03:20.323 ****** 2026-02-17 03:18:08.632567 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:18:08.632574 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:18:08.632581 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:18:08.632588 | orchestrator | 2026-02-17 03:18:08.632595 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-17 03:18:08.632602 | orchestrator | Tuesday 17 February 2026 03:18:00 +0000 (0:00:01.344) 0:03:21.668 ****** 2026-02-17 03:18:08.632608 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:18:08.632615 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:18:08.632622 | orchestrator | changed: [testbed-node-2] 
2026-02-17 03:18:08.632629 | orchestrator | 2026-02-17 03:18:08.632636 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-17 03:18:08.632642 | orchestrator | Tuesday 17 February 2026 03:18:03 +0000 (0:00:02.080) 0:03:23.748 ****** 2026-02-17 03:18:08.632649 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:18:08.632656 | orchestrator | 2026-02-17 03:18:08.632663 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-17 03:18:08.632684 | orchestrator | Tuesday 17 February 2026 03:18:04 +0000 (0:00:01.246) 0:03:24.994 ****** 2026-02-17 03:18:08.632693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:18:08.632706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:18:08.632714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:18:08.632727 | orchestrator | 2026-02-17 03:18:08.632734 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-17 03:18:08.632741 | orchestrator | Tuesday 17 February 2026 03:18:07 +0000 (0:00:03.656) 0:03:28.651 ****** 2026-02-17 03:18:08.632748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-17 03:18:08.632755 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:18:08.632767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-17 03:18:19.362449 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:18:19.362652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-17 03:18:19.362689 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:18:19.362720 | orchestrator |
2026-02-17 03:18:19.362742 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-02-17 03:18:19.362764 | orchestrator | Tuesday 17 February 2026 03:18:08 +0000 (0:00:00.655) 0:03:29.307 ******
2026-02-17 03:18:19.362787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-17 03:18:19.362838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-17 03:18:19.362863 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:18:19.362885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-17 03:18:19.362936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-17 03:18:19.362955 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:18:19.362973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-17 03:18:19.362991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-17 03:18:19.363010 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:18:19.363029 | orchestrator |
2026-02-17 03:18:19.363047 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-02-17 03:18:19.363066 | orchestrator | Tuesday 17 February 2026 03:18:09 +0000 (0:00:00.778) 0:03:30.085 ******
2026-02-17 03:18:19.363083 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:18:19.363102 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:18:19.363121 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:18:19.363139 | orchestrator |
2026-02-17 03:18:19.363157 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-02-17 03:18:19.363176 | orchestrator | Tuesday 17 February 2026 03:18:11 +0000 (0:00:01.956) 0:03:32.042 ******
2026-02-17 03:18:19.363195 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:18:19.363213 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:18:19.363231 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:18:19.363266 | orchestrator |
2026-02-17 03:18:19.363285 | orchestrator | TASK [include_role : nova] *****************************************************
2026-02-17 03:18:19.363304 | orchestrator | Tuesday 17 February 2026 03:18:13 +0000 (0:00:01.637) 0:03:33.966 ******
2026-02-17 03:18:19.363323 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:18:19.363341 | orchestrator |
2026-02-17 03:18:19.363358 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-02-17 03:18:19.363376 | orchestrator | Tuesday 17 February 2026 03:18:14 +0000 (0:00:01.637) 0:03:35.604 ******
2026-02-17 03:18:19.363427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-17 03:18:19.363482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:18:19.363505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 03:18:19.363527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-17 03:18:19.363548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:18:19.363580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 03:18:20.755685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-17 03:18:20.755789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:18:20.755804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 03:18:20.755816 | orchestrator |
2026-02-17 03:18:20.755828 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-02-17 03:18:20.755840 | orchestrator | Tuesday 17 February 2026 03:18:19 +0000 (0:00:04.428) 0:03:40.033 ******
2026-02-17 03:18:20.755852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-17 03:18:20.755958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:18:20.755980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 03:18:20.755991 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:18:20.756003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-17 03:18:20.756014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:18:20.756024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 03:18:20.756035 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:18:20.756059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-17 03:18:34.059033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 03:18:34.059137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 03:18:34.059148 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:18:34.059158 | orchestrator |
2026-02-17 03:18:34.059166 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-02-17 03:18:34.059174 | orchestrator | Tuesday 17 February 2026 03:18:20 +0000 (0:00:01.394) 0:03:41.428 ******
2026-02-17 03:18:34.059183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059218 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:18:34.059225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059288 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:18:34.059299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-17 03:18:34.059376 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:18:34.059387 | orchestrator |
2026-02-17 03:18:34.059397 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-02-17 03:18:34.059407 | orchestrator | Tuesday 17 February 2026 03:18:21 +0000 (0:00:00.982) 0:03:42.410 ******
2026-02-17 03:18:34.059418 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:18:34.059429 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:18:34.059439 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:18:34.059449 | orchestrator |
2026-02-17 03:18:34.059459 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-02-17 03:18:34.059470 | orchestrator | Tuesday 17 February 2026 03:18:23 +0000 (0:00:01.438) 0:03:43.849 ******
2026-02-17 03:18:34.059480 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:18:34.059490 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:18:34.059501 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:18:34.059511 | orchestrator |
2026-02-17 03:18:34.059522 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-02-17 03:18:34.059532 | orchestrator | Tuesday 17 February 2026 03:18:25 +0000 (0:00:02.418) 0:03:46.267 ******
2026-02-17 03:18:34.059543 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:18:34.059554 | orchestrator |
2026-02-17 03:18:34.059564 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-02-17 03:18:34.059575 | orchestrator | Tuesday 17 February 2026 03:18:27 +0000 (0:00:01.729) 0:03:47.996 ******
2026-02-17 03:18:34.059586 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-02-17 03:18:34.059599 | orchestrator |
2026-02-17 03:18:34.059609 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-02-17 03:18:34.059619 | orchestrator | Tuesday 17 February 2026 03:18:28 +0000 (0:00:00.877) 0:03:48.874 ******
2026-02-17 03:18:34.059632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:34.059654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:34.059666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:34.059677 | orchestrator |
2026-02-17 03:18:34.059688 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-17 03:18:34.059700 | orchestrator | Tuesday 17 February 2026 03:18:32 +0000 (0:00:04.334) 0:03:53.209 ******
2026-02-17 03:18:34.059711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:34.059721 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:18:34.059743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:53.970476 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:18:53.970604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:53.970626 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:18:53.970639 | orchestrator |
2026-02-17 03:18:53.970672 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-17 03:18:53.970697 | orchestrator | Tuesday 17 February 2026 03:18:34 +0000 (0:00:01.520) 0:03:54.729 ******
2026-02-17 03:18:53.970713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-17 03:18:53.970729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-17 03:18:53.970770 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:18:53.970783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-17 03:18:53.970798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-17 03:18:53.970812 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:18:53.970825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-17 03:18:53.970838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-17 03:18:53.970850 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:18:53.970864 | orchestrator |
2026-02-17 03:18:53.970878 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-17 03:18:53.970891 | orchestrator | Tuesday 17 February 2026 03:18:35 +0000 (0:00:01.723) 0:03:56.452 ******
2026-02-17 03:18:53.970904 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:18:53.970973 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:18:53.970989 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:18:53.971004 | orchestrator |
2026-02-17 03:18:53.971018 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-17 03:18:53.971034 | orchestrator | Tuesday 17 February 2026 03:18:38 +0000 (0:00:02.708) 0:03:59.161 ******
2026-02-17 03:18:53.971048 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:18:53.971062 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:18:53.971076 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:18:53.971092 | orchestrator |
2026-02-17 03:18:53.971107 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-17 03:18:53.971121 | orchestrator | Tuesday 17 February 2026 03:18:41 +0000 (0:00:03.000) 0:04:02.162 ******
2026-02-17 03:18:53.971137 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-17 03:18:53.971153 | orchestrator |
2026-02-17 03:18:53.971168 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-17 03:18:53.971183 | orchestrator | Tuesday 17 February 2026 03:18:42 +0000 (0:00:01.103) 0:04:03.266 ******
2026-02-17 03:18:53.971217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:53.971235 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:18:53.971274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:53.971302 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:18:53.971316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:53.971330 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:18:53.971345 | orchestrator |
2026-02-17 03:18:53.971359 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-17 03:18:53.971372 | orchestrator | Tuesday 17 February 2026 03:18:43 +0000 (0:00:01.375) 0:04:04.641 ******
2026-02-17 03:18:53.971385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:53.971399 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:18:53.971412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-17 03:18:53.971427 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:18:53.971441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra':
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 03:18:53.971456 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:18:53.971470 | orchestrator | 2026-02-17 03:18:53.971483 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-17 03:18:53.971498 | orchestrator | Tuesday 17 February 2026 03:18:45 +0000 (0:00:01.327) 0:04:05.968 ****** 2026-02-17 03:18:53.971512 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:18:53.971526 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:18:53.971540 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:18:53.971554 | orchestrator | 2026-02-17 03:18:53.971567 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-17 03:18:53.971581 | orchestrator | Tuesday 17 February 2026 03:18:46 +0000 (0:00:01.548) 0:04:07.516 ****** 2026-02-17 03:18:53.971595 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:18:53.971609 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:18:53.971623 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:18:53.971637 | orchestrator | 2026-02-17 03:18:53.971650 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-17 03:18:53.971664 | orchestrator | Tuesday 17 February 2026 03:18:50 +0000 (0:00:03.183) 0:04:10.699 ****** 2026-02-17 03:18:53.971686 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:18:53.971699 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:18:53.971713 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:18:53.971726 | orchestrator | 2026-02-17 03:18:53.971746 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-17 03:18:53.971760 | orchestrator | Tuesday 17 
February 2026 03:18:52 +0000 (0:00:02.719) 0:04:13.419 ****** 2026-02-17 03:18:53.971774 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-17 03:18:53.971788 | orchestrator | 2026-02-17 03:18:53.971810 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-17 03:19:10.033153 | orchestrator | Tuesday 17 February 2026 03:18:53 +0000 (0:00:01.219) 0:04:14.638 ****** 2026-02-17 03:19:10.033251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 03:19:10.033261 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:19:10.033267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 03:19:10.033271 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:19:10.033275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 03:19:10.033280 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:19:10.033284 | orchestrator | 2026-02-17 03:19:10.033290 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-17 03:19:10.033295 | orchestrator | Tuesday 17 February 2026 03:18:55 +0000 (0:00:01.413) 0:04:16.052 ****** 2026-02-17 03:19:10.033299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 03:19:10.033304 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:19:10.033308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 
03:19:10.033329 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:19:10.033335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 03:19:10.033341 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:19:10.033348 | orchestrator | 2026-02-17 03:19:10.033368 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-17 03:19:10.033374 | orchestrator | Tuesday 17 February 2026 03:18:56 +0000 (0:00:01.513) 0:04:17.565 ****** 2026-02-17 03:19:10.033381 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:19:10.034207 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:19:10.034245 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:19:10.034253 | orchestrator | 2026-02-17 03:19:10.034262 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-17 03:19:10.034293 | orchestrator | Tuesday 17 February 2026 03:18:58 +0000 (0:00:02.095) 0:04:19.661 ****** 2026-02-17 03:19:10.034301 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:19:10.034308 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:19:10.034315 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:19:10.034322 | orchestrator | 2026-02-17 03:19:10.034329 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-17 03:19:10.034337 | orchestrator | Tuesday 17 February 2026 03:19:01 +0000 (0:00:02.437) 0:04:22.099 ****** 2026-02-17 03:19:10.034344 | 
orchestrator | ok: [testbed-node-0] 2026-02-17 03:19:10.034350 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:19:10.034356 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:19:10.034362 | orchestrator | 2026-02-17 03:19:10.034369 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-17 03:19:10.034376 | orchestrator | Tuesday 17 February 2026 03:19:04 +0000 (0:00:03.284) 0:04:25.383 ****** 2026-02-17 03:19:10.034381 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:19:10.034385 | orchestrator | 2026-02-17 03:19:10.034389 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-17 03:19:10.034394 | orchestrator | Tuesday 17 February 2026 03:19:06 +0000 (0:00:01.704) 0:04:27.088 ****** 2026-02-17 03:19:10.034400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 03:19:10.034408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 03:19:10.034432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 03:19:10.034443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 03:19:10.034471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:19:10.865166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 03:19:10.865258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 03:19:10.865269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 03:19:10.865298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 03:19:10.865306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:19:10.865328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 03:19:10.865336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 03:19:10.865343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 03:19:10.865389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 03:19:10.865404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:19:10.865411 | orchestrator | 2026-02-17 03:19:10.865419 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-17 03:19:10.865427 | orchestrator | Tuesday 17 February 2026 03:19:10 +0000 (0:00:03.760) 0:04:30.849 ****** 2026-02-17 03:19:10.865441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-17 03:19:10.865455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-17 03:19:11.012573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 03:19:11.012682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 03:19:11.012724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 03:19:11.012738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 03:19:11.012759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 03:19:11.012800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 03:19:11.012823 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:19:11.012869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 03:19:11.012893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 03:19:11.012989 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:11.013010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 03:19:11.013023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 03:19:11.013034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 03:19:11.013053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 03:19:11.013065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 03:19:23.867902 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:23.868065 | orchestrator |
2026-02-17 03:19:23.868085 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-17 03:19:23.868098 | orchestrator | Tuesday 17 February 2026 03:19:11 +0000 (0:00:00.838) 0:04:31.687 ******
2026-02-17 03:19:23.868111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-17 03:19:23.868151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-17 03:19:23.868165 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:23.868177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-17 03:19:23.868189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-17 03:19:23.868200 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:23.868211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-17 03:19:23.868222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-17 03:19:23.868233 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:23.868244 | orchestrator |
2026-02-17 03:19:23.868256 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-02-17 03:19:23.868267 | orchestrator | Tuesday 17 February 2026 03:19:12 +0000 (0:00:02.131) 0:04:32.763 ******
2026-02-17 03:19:23.868278 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:19:23.868289 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:19:23.868300 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:19:23.868311 | orchestrator |
2026-02-17 03:19:23.868323 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-02-17 03:19:23.868334 | orchestrator | Tuesday 17 February 2026 03:19:14 +0000 (0:00:02.319) 0:04:34.894 ******
2026-02-17 03:19:23.868345 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:19:23.868356 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:19:23.868367 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:19:23.868379 | orchestrator |
2026-02-17 03:19:23.868390 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-02-17 03:19:23.868401 | orchestrator | Tuesday 17 February 2026 03:19:16 +0000 (0:00:01.477) 0:04:37.213 ******
2026-02-17 03:19:23.868412 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:19:23.868424 | orchestrator |
2026-02-17 03:19:23.868437 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-02-17 03:19:23.868449 | orchestrator | Tuesday 17 February 2026 03:19:18 +0000 (0:00:01.477) 0:04:38.691 ******
2026-02-17 03:19:23.868484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-17 03:19:23.868520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-17 03:19:23.868544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-17 03:19:23.868560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-17 03:19:23.868581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-17 03:19:23.868606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-17 03:19:25.987162 | orchestrator |
2026-02-17 03:19:25.987236 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-02-17 03:19:25.987246 | orchestrator | Tuesday 17 February 2026 03:19:23 +0000 (0:00:05.842) 0:04:44.533 ******
2026-02-17 03:19:25.987257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-17 03:19:25.987269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-17 03:19:25.987279 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:25.987305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-17 03:19:25.987314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-17 03:19:25.987356 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:25.987364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-17 03:19:25.987373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-17 03:19:25.987380 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:25.987388 | orchestrator |
2026-02-17 03:19:25.987396 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-02-17 03:19:25.987403 | orchestrator | Tuesday 17 February 2026 03:19:24 +0000 (0:00:01.088) 0:04:45.622 ******
2026-02-17 03:19:25.987413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-17 03:19:25.987423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-17 03:19:25.987433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-17 03:19:25.987459 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:25.987470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-17 03:19:25.987478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-17 03:19:25.987484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-17 03:19:25.987495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-17 03:19:25.987503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-17 03:19:25.987511 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:25.987524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-17 03:19:32.482254 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:32.482355 | orchestrator |
2026-02-17 03:19:32.482366 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-02-17 03:19:32.482373 | orchestrator | Tuesday 17 February 2026 03:19:25 +0000 (0:00:01.031) 0:04:46.654 ******
2026-02-17 03:19:32.482379 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:32.482385 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:32.482390 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:32.482395 | orchestrator |
2026-02-17 03:19:32.482400 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-02-17 03:19:32.482406 | orchestrator | Tuesday 17 February 2026 03:19:26 +0000 (0:00:00.480) 0:04:47.135 ******
2026-02-17 03:19:32.482411 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:32.482416 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:32.482422 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:32.482427 | orchestrator |
2026-02-17 03:19:32.482432 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-02-17 03:19:32.482437 | orchestrator | Tuesday 17 February 2026 03:19:27 +0000 (0:00:01.541) 0:04:48.677 ******
2026-02-17 03:19:32.482443 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:19:32.482449 | orchestrator |
2026-02-17 03:19:32.482463 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-02-17 03:19:32.482468 | orchestrator | Tuesday 17 February 2026 03:19:29 +0000 (0:00:01.763) 0:04:50.441 ******
2026-02-17 03:19:32.482476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-17 03:19:32.482503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 03:19:32.482521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:32.482527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:32.482535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 03:19:32.482554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-17 03:19:32.482560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 03:19:32.482566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:32.482576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:32.482582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 03:19:32.482591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-17 03:19:32.482596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 03:19:32.482607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:34.279440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:34.279558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 03:19:34.279608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-17 03:19:34.279639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-17 03:19:34.279651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:34.279661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:34.279690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-17 03:19:34.279701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-17 03:19:34.279746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-17 03:19:34.279758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:34.279769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:19:34.279780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-17 03:19:34.279799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager',
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-17 03:19:35.115101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-17 03:19:35.115184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.115210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.115218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 03:19:35.115225 | orchestrator | 2026-02-17 03:19:35.115233 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-17 03:19:35.115241 | orchestrator | Tuesday 17 February 2026 03:19:34 +0000 (0:00:04.691) 0:04:55.132 ****** 2026-02-17 03:19:35.115248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-17 03:19:35.115256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 03:19:35.115295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.115303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.115313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 03:19:35.115328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-17 03:19:35.115337 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-17 03:19:35.115346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.115377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.279948 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 03:19:35.280056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-17 03:19:35.280092 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:19:35.280109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 03:19:35.280122 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.280136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.280149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 03:19:35.280206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-17 03:19:35.280223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-17 03:19:35.280243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-17 03:19:35.280258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.280271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 03:19:35.280291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:35.280311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:37.305770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 03:19:37.305841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:37.305848 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:19:37.305868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 03:19:37.305877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-17 03:19:37.305883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-17 03:19:37.305907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:37.305923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 03:19:37.305954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 03:19:37.305959 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:19:37.305963 | orchestrator | 2026-02-17 03:19:37.305968 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-17 03:19:37.305973 | orchestrator | Tuesday 17 February 2026 03:19:35 +0000 (0:00:01.029) 0:04:56.161 ****** 2026-02-17 03:19:37.305981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-17 03:19:37.305989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-17 03:19:37.305996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-17 03:19:37.306003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-17 03:19:37.306008 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:19:37.306046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})
2026-02-17 03:19:37.306056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-17 03:19:37.306060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-17 03:19:37.306064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-17 03:19:37.306068 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:37.306072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-17 03:19:37.306076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-17 03:19:37.306080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-17 03:19:37.306088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-17 03:19:46.397840 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:46.397996 | orchestrator |
2026-02-17 03:19:46.398067 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-02-17 03:19:46.398082 | orchestrator | Tuesday 17 February 2026 03:19:37 +0000 (0:00:01.806) 0:04:57.968 ******
2026-02-17 03:19:46.398093 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:46.398103 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:46.398113 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:46.398123 | orchestrator |
2026-02-17 03:19:46.398133 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-02-17 03:19:46.398143 | orchestrator | Tuesday 17 February 2026 03:19:37 +0000 (0:00:00.518) 0:04:58.487 ******
2026-02-17 03:19:46.398153 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:46.398163 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:46.398173 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:46.398183 | orchestrator |
2026-02-17 03:19:46.398193 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-17 03:19:46.398203 | orchestrator | Tuesday 17 February 2026 03:19:39 +0000 (0:00:01.790) 0:05:00.277 ******
2026-02-17 03:19:46.398213 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:19:46.398223 | orchestrator |
2026-02-17 03:19:46.398233 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-17 03:19:46.398242 | orchestrator | Tuesday 17 February 2026 03:19:41 +0000 (0:00:02.055) 0:05:02.332
2026-02-17 03:19:46.398256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:19:46.398296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:19:46.398346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:19:46.398358 | orchestrator |
2026-02-17 03:19:46.398368 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-17 03:19:46.398403 | orchestrator | Tuesday 17 February 2026 03:19:44 +0000 (0:00:02.361) 0:05:04.694 ******
2026-02-17 03:19:46.398422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:19:46.398443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:19:46.398455 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:46.398467 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:46.398478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:19:46.398490 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:46.398501 | orchestrator |
2026-02-17 03:19:46.398513 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-02-17 03:19:46.398524 | orchestrator | Tuesday 17 February 2026 03:19:44 +0000 (0:00:00.506) 0:05:05.200 ******
2026-02-17 03:19:46.398537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-17 03:19:46.398549 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:46.398561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-17 03:19:46.398572 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:46.398583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-17 03:19:46.398593 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:46.398604 | orchestrator |
2026-02-17 03:19:46.398615 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-02-17 03:19:46.398626 | orchestrator | Tuesday 17 February 2026 03:19:45 +0000 (0:00:01.283) 0:05:06.484 ******
2026-02-17 03:19:46.398644 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:58.544171 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:58.544325 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:58.544347 | orchestrator |
2026-02-17 03:19:58.544362 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-02-17 03:19:58.544377 | orchestrator | Tuesday 17 February 2026 03:19:46 +0000 (0:00:00.592) 0:05:07.076 ******
2026-02-17 03:19:58.544389 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:19:58.544426 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:19:58.544439 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:19:58.544451 | orchestrator |
2026-02-17 03:19:58.544464 | orchestrator | TASK [include_role : skyline] **************************************************
2026-02-17 03:19:58.544475 | orchestrator | Tuesday 17 February 2026 03:19:48 +0000 (0:00:01.797) 0:05:08.874 ******
2026-02-17 03:19:58.544487 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:19:58.544499 | orchestrator |
2026-02-17 03:19:58.544510 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-02-17 03:19:58.544522 | orchestrator | Tuesday 17 February 2026 03:19:49 +0000 (0:00:01.760) 0:05:10.634 ******
2026-02-17 03:19:58.544554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 03:19:58.544602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 03:19:58.544616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 03:19:58.544652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 03:19:58.544687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 03:19:58.544702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 03:19:58.544716 | orchestrator |
2026-02-17 03:19:58.544730 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-02-17 03:19:58.544744 | orchestrator | Tuesday 17 February 2026 03:19:57 +0000 (0:00:07.871) 0:05:18.506 ******
2026-02-17 03:19:58.544758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 03:19:58.544783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 03:20:04.561482 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:04.561591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 03:20:04.561606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 03:20:04.561616 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:04.561624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 03:20:04.561631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 03:20:04.561656 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:04.561664 | orchestrator |
2026-02-17 03:20:04.561671 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-02-17 03:20:04.561679 | orchestrator | Tuesday 17 February 2026 03:19:58 +0000 (0:00:00.712) 0:05:19.219 ******
2026-02-17 03:20:04.561700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561737 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:04.561744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561772 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:04.561779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-17 03:20:04.561806 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:04.561819 | orchestrator |
2026-02-17 03:20:04.561826 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-02-17 03:20:04.561832 | orchestrator | Tuesday 17 February 2026 03:19:59 +0000 (0:00:00.968) 0:05:20.188 ******
2026-02-17 03:20:04.561839 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:20:04.561846 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:20:04.561853 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:20:04.561860 | orchestrator |
2026-02-17 03:20:04.561867 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-02-17 03:20:04.561873 | orchestrator | Tuesday 17 February 2026 03:20:00 +0000 (0:00:01.345) 0:05:21.533 ******
2026-02-17 03:20:04.561880 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:20:04.561887 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:20:04.561894 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:20:04.561900 | orchestrator |
2026-02-17 03:20:04.561908 | orchestrator | TASK [include_role : swift] ****************************************************
2026-02-17 03:20:04.561914 | orchestrator | Tuesday 17 February 2026 03:20:03 +0000 (0:00:02.285) 0:05:23.819 ******
2026-02-17 03:20:04.561921 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:04.561928 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:04.561935 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:04.561982 | orchestrator |
2026-02-17 03:20:04.561990 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-02-17 03:20:04.561997 | orchestrator | Tuesday 17 February 2026 03:20:03 +0000 (0:00:00.704) 0:05:24.523 ******
2026-02-17 03:20:04.562003 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:04.562010 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:04.562066 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:04.562074 | orchestrator |
2026-02-17 03:20:04.562082 | orchestrator | TASK [include_role : trove] ****************************************************
2026-02-17 03:20:04.562091 | orchestrator | Tuesday 17 February 2026 03:20:04 +0000 (0:00:00.365) 0:05:24.888 ******
2026-02-17 03:20:04.562099 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:04.562113 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.570059 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.570159 | orchestrator |
2026-02-17 03:20:51.570171 | orchestrator | TASK [include_role : venus] ****************************************************
2026-02-17 03:20:51.570181 | orchestrator | Tuesday 17 February 2026 03:20:04 +0000 (0:00:00.354) 0:05:25.242 ******
2026-02-17 03:20:51.570188 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.570195 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.570202 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.570210 | orchestrator |
2026-02-17 03:20:51.570217 | orchestrator | TASK [include_role : watcher] **************************************************
2026-02-17 03:20:51.570225 | orchestrator | Tuesday 17 February 2026 03:20:04 +0000 (0:00:00.343) 0:05:25.586 ******
2026-02-17 03:20:51.570232 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.570240 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.570248 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.570255 | orchestrator |
2026-02-17 03:20:51.570262 | orchestrator | TASK [include_role : zun] ******************************************************
2026-02-17 03:20:51.570283 | orchestrator | Tuesday 17 February 2026 03:20:05 +0000 (0:00:00.746) 0:05:26.332 ******
2026-02-17 03:20:51.570291 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.570298 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.570305 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.570312 | orchestrator |
2026-02-17 03:20:51.570319 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-02-17 03:20:51.570326 | orchestrator | Tuesday 17 February 2026 03:20:06 +0000 (0:00:00.599) 0:05:26.932 ******
2026-02-17 03:20:51.570334 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.570341 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.570349 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.570356 | orchestrator |
2026-02-17 03:20:51.570363 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-02-17 03:20:51.570386 | orchestrator | Tuesday 17 February 2026 03:20:06 +0000 (0:00:00.697) 0:05:27.630 ******
2026-02-17 03:20:51.570393 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.570399 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.570407 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.570414 | orchestrator |
2026-02-17 03:20:51.570421 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-17 03:20:51.570428 | orchestrator | Tuesday 17 February 2026 03:20:07 +0000 (0:00:00.761) 0:05:28.391 ******
2026-02-17 03:20:51.570435 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.570442 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.570449 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.570456 | orchestrator |
2026-02-17 03:20:51.570463 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-17 03:20:51.570470 | orchestrator | Tuesday 17 February 2026 03:20:08 +0000 (0:00:00.922) 0:05:29.314 ******
2026-02-17 03:20:51.570478 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.570485 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.570492 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.570499 | orchestrator |
2026-02-17 03:20:51.570506 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-17 03:20:51.570513 | orchestrator | Tuesday 17 February 2026 03:20:09 +0000 (0:00:00.864) 0:05:30.179 ******
2026-02-17 03:20:51.570520 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.570527 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.570534 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.570541 | orchestrator |
2026-02-17 03:20:51.570547 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-17 03:20:51.570554 | orchestrator | Tuesday 17 February 2026 03:20:10 +0000 (0:00:00.894) 0:05:31.073 ******
2026-02-17 03:20:51.570561 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:20:51.570568 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:20:51.570574 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:20:51.570580 | orchestrator |
2026-02-17 03:20:51.570586 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-17 03:20:51.570592 | orchestrator | Tuesday 17 February 2026 03:20:18 +0000 (0:00:08.470) 0:05:39.543 ******
2026-02-17 03:20:51.570599 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.570605 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.570612 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.570620 | orchestrator |
2026-02-17 03:20:51.570627 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-17 03:20:51.570634 | orchestrator | Tuesday 17 February 2026 03:20:20 +0000 (0:00:01.205) 0:05:40.749 ******
2026-02-17 03:20:51.570642 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:20:51.570649 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:20:51.570657 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:20:51.570664 | orchestrator |
2026-02-17 03:20:51.570672 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-17 03:20:51.570680 | orchestrator | Tuesday 17 February 2026 03:20:31 +0000 (0:00:11.216) 0:05:51.965 ******
2026-02-17 03:20:51.570687 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.570694 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.570702 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.570709 | orchestrator |
2026-02-17 03:20:51.570716 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-17 03:20:51.570723 | orchestrator | Tuesday 17 February 2026 03:20:36 +0000 (0:00:04.786) 0:05:56.752 ******
2026-02-17 03:20:51.570731 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:20:51.570738 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:20:51.570745 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:20:51.570752 | orchestrator |
2026-02-17 03:20:51.570760 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-17 03:20:51.570768 | orchestrator | Tuesday 17 February 2026 03:20:45 +0000 (0:00:09.762) 0:06:06.514 ******
2026-02-17 03:20:51.570784 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.570791 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.570799 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.570806 | orchestrator |
2026-02-17 03:20:51.570813 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-17 03:20:51.570821 | orchestrator | Tuesday 17 February 2026 03:20:46 +0000 (0:00:00.755) 0:06:07.269 ******
2026-02-17 03:20:51.570828 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.570835 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.570843 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.570850 | orchestrator |
2026-02-17 03:20:51.570872 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-17 03:20:51.570880 | orchestrator | Tuesday 17 February 2026 03:20:46 +0000 (0:00:00.381) 0:06:07.650 ******
2026-02-17 03:20:51.570887 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.570895 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.570902 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.570909 | orchestrator |
2026-02-17 03:20:51.570917 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-17 03:20:51.570925 | orchestrator | Tuesday 17 February 2026 03:20:47 +0000 (0:00:00.387) 0:06:08.037 ******
2026-02-17 03:20:51.570932 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.570939 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.570946 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.570952 | orchestrator |
2026-02-17 03:20:51.570976 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-17 03:20:51.570983 | orchestrator | Tuesday 17 February 2026 03:20:47 +0000 (0:00:00.356) 0:06:08.394 ******
2026-02-17 03:20:51.570989 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.571001 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.571007 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.571013 | orchestrator |
2026-02-17 03:20:51.571019 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-17 03:20:51.571025 | orchestrator | Tuesday 17 February 2026 03:20:48 +0000 (0:00:00.767) 0:06:09.161 ******
2026-02-17 03:20:51.571031 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:20:51.571038 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:20:51.571044 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:20:51.571051 | orchestrator |
2026-02-17 03:20:51.571057 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-17 03:20:51.571063 | orchestrator | Tuesday 17 February 2026 03:20:48 +0000 (0:00:00.377) 0:06:09.539 ******
2026-02-17 03:20:51.571070 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.571076 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.571084 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.571090 | orchestrator |
2026-02-17 03:20:51.571099 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-17 03:20:51.571107 | orchestrator | Tuesday 17 February 2026 03:20:49 +0000 (0:00:00.953)
0:06:10.493 ******
2026-02-17 03:20:51.571112 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:20:51.571118 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:20:51.571127 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:20:51.571133 | orchestrator |
2026-02-17 03:20:51.571139 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:20:51.571146 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-17 03:20:51.571154 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-17 03:20:51.571160 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-17 03:20:51.571166 | orchestrator |
2026-02-17 03:20:51.571182 | orchestrator |
2026-02-17 03:20:51.571190 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:20:51.571196 | orchestrator | Tuesday 17 February 2026 03:20:50 +0000 (0:00:00.874) 0:06:11.367 ******
2026-02-17 03:20:51.571203 | orchestrator | ===============================================================================
2026-02-17 03:20:51.571210 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.22s
2026-02-17 03:20:51.571216 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.76s
2026-02-17 03:20:51.571222 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.47s
2026-02-17 03:20:51.571229 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.87s
2026-02-17 03:20:51.571235 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.84s
2026-02-17 03:20:51.571242 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.79s
2026-02-17 03:20:51.571249 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.69s
2026-02-17 03:20:51.571256 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.43s
2026-02-17 03:20:51.571263 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.37s
2026-02-17 03:20:51.571270 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.33s
2026-02-17 03:20:51.571276 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.16s
2026-02-17 03:20:51.571284 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.86s
2026-02-17 03:20:51.571290 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.78s
2026-02-17 03:20:51.571297 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.76s
2026-02-17 03:20:51.571304 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.66s
2026-02-17 03:20:51.571311 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.60s
2026-02-17 03:20:51.571317 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.60s
2026-02-17 03:20:51.571324 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.44s
2026-02-17 03:20:51.571331 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.44s
2026-02-17 03:20:51.571337 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.43s
2026-02-17 03:20:54.034847 | orchestrator | 2026-02-17 03:20:54 | INFO  | Task 4f41892b-1e01-4a4d-a9a2-4af59d9d44be (opensearch) was prepared for execution.
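The loadbalancer play above finishes with "Wait for haproxy to listen on VIP" and "Wait for proxysql to listen on VIP" handlers, i.e. polling until a TCP port on the virtual IP accepts connections (as Ansible's `wait_for` module does). A minimal standalone sketch of such a probe is below; the function name, host, ports, and timeouts are illustrative assumptions, not values taken from this job's configuration.

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll until a TCP connect to (host, port) succeeds, or the timeout expires.

    Returns True as soon as something accepts a connection on the port,
    False if the deadline passes without a successful connect.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means a listener (e.g. haproxy on the VIP) is up.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Not listening yet (refused / unreachable); back off and retry.
            time.sleep(interval)
    return False
```

In the run above the equivalent checks target the internal VIP; the log does not show which ports are probed, so any concrete `wait_for_port(vip, port)` call would be an assumption.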
2026-02-17 03:20:54.034940 | orchestrator | 2026-02-17 03:20:54 | INFO  | It takes a moment until task 4f41892b-1e01-4a4d-a9a2-4af59d9d44be (opensearch) has been started and output is visible here.
2026-02-17 03:21:05.103482 | orchestrator |
2026-02-17 03:21:05.103623 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 03:21:05.103651 | orchestrator |
2026-02-17 03:21:05.103673 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 03:21:05.103692 | orchestrator | Tuesday 17 February 2026 03:20:58 +0000 (0:00:00.261) 0:00:00.261 ******
2026-02-17 03:21:05.103736 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:21:05.103772 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:21:05.103791 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:21:05.103809 | orchestrator |
2026-02-17 03:21:05.103828 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 03:21:05.103847 | orchestrator | Tuesday 17 February 2026 03:20:58 +0000 (0:00:00.294) 0:00:00.555 ******
2026-02-17 03:21:05.103889 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-17 03:21:05.103910 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-17 03:21:05.103928 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-17 03:21:05.103948 | orchestrator |
2026-02-17 03:21:05.103993 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-17 03:21:05.104079 | orchestrator |
2026-02-17 03:21:05.104100 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-17 03:21:05.104118 | orchestrator | Tuesday 17 February 2026 03:20:59 +0000 (0:00:00.468) 0:00:01.023 ******
2026-02-17 03:21:05.104138 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-02-17 03:21:05.104159 | orchestrator | 2026-02-17 03:21:05.104178 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-17 03:21:05.104212 | orchestrator | Tuesday 17 February 2026 03:20:59 +0000 (0:00:00.518) 0:00:01.542 ****** 2026-02-17 03:21:05.104224 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 03:21:05.104235 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 03:21:05.104247 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 03:21:05.104258 | orchestrator | 2026-02-17 03:21:05.104269 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-17 03:21:05.104279 | orchestrator | Tuesday 17 February 2026 03:21:00 +0000 (0:00:00.641) 0:00:02.184 ****** 2026-02-17 03:21:05.104293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:05.104308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:05.104344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:05.104367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:05.104390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:05.104403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:05.104416 | orchestrator | 2026-02-17 03:21:05.104427 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-17 03:21:05.104438 | orchestrator | Tuesday 17 February 2026 03:21:02 +0000 (0:00:01.727) 0:00:03.912 ****** 2026-02-17 03:21:05.104449 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:21:05.104460 | orchestrator | 2026-02-17 03:21:05.104471 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-17 03:21:05.104482 | orchestrator | Tuesday 17 February 2026 03:21:02 +0000 (0:00:00.554) 0:00:04.466 ****** 2026-02-17 03:21:05.104507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:06.034799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:06.034916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:06.034946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:06.034994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:06.035075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:06.035092 | orchestrator | 2026-02-17 03:21:06.035105 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-17 03:21:06.035117 | orchestrator | Tuesday 17 February 2026 03:21:05 +0000 (0:00:02.440) 0:00:06.907 ****** 
2026-02-17 03:21:06.035130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-17 03:21:06.035142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-02-17 03:21:06.035155 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:21:06.035167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-17 03:21:06.035202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-17 03:21:07.283021 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:21:07.283119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-17 03:21:07.283138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-17 03:21:07.283150 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:21:07.283161 | orchestrator | 2026-02-17 03:21:07.283172 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-17 03:21:07.283183 | orchestrator | Tuesday 17 February 2026 03:21:06 +0000 (0:00:00.924) 0:00:07.831 ****** 2026-02-17 03:21:07.283217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-17 03:21:07.283244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-17 03:21:07.283273 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:21:07.283284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-17 03:21:07.283295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-17 03:21:07.283306 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:21:07.283324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-17 03:21:07.283341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-17 03:21:07.283351 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:21:07.283361 | orchestrator | 2026-02-17 03:21:07.283371 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-17 03:21:07.283389 | orchestrator | Tuesday 17 February 2026 03:21:07 +0000 (0:00:01.247) 0:00:09.079 ****** 2026-02-17 03:21:15.522945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:15.523084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:15.523100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:15.523150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:15.523183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:15.523197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:21:15.523218 | orchestrator | 2026-02-17 03:21:15.523229 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-17 03:21:15.523240 | orchestrator | Tuesday 17 February 2026 03:21:09 +0000 (0:00:02.337) 0:00:11.417 ****** 2026-02-17 03:21:15.523251 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:21:15.523261 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:21:15.523271 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:21:15.523281 | orchestrator | 2026-02-17 03:21:15.523291 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-17 03:21:15.523301 | orchestrator | Tuesday 17 February 2026 03:21:11 +0000 (0:00:02.280) 0:00:13.697 ****** 2026-02-17 03:21:15.523311 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:21:15.523321 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:21:15.523330 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:21:15.523340 | 
orchestrator | 2026-02-17 03:21:15.523350 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-17 03:21:15.523360 | orchestrator | Tuesday 17 February 2026 03:21:13 +0000 (0:00:01.846) 0:00:15.544 ****** 2026-02-17 03:21:15.523371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:21:15.523387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-17 03:21:15.523405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-17 03:23:58.272774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-17 03:23:58.272869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:23:58.272891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-17 03:23:58.272902 | orchestrator | 2026-02-17 03:23:58.272911 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-17 03:23:58.272919 | orchestrator | Tuesday 17 February 2026 03:21:15 +0000 (0:00:01.780) 0:00:17.324 ****** 2026-02-17 03:23:58.272926 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:23:58.272935 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:23:58.272942 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:23:58.272950 | orchestrator | 2026-02-17 03:23:58.272958 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-17 03:23:58.272965 | orchestrator | Tuesday 17 February 2026 03:21:15 +0000 (0:00:00.305) 0:00:17.629 ****** 2026-02-17 03:23:58.272973 | orchestrator | 2026-02-17 03:23:58.272980 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-17 03:23:58.272987 | orchestrator | Tuesday 17 February 2026 03:21:15 +0000 (0:00:00.061) 0:00:17.691 ****** 2026-02-17 03:23:58.272995 | orchestrator | 2026-02-17 03:23:58.273002 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-17 03:23:58.273016 | orchestrator | Tuesday 17 February 2026 03:21:15 +0000 (0:00:00.066) 0:00:17.757 ****** 2026-02-17 03:23:58.273024 | orchestrator | 2026-02-17 03:23:58.273031 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-17 03:23:58.273070 | orchestrator | Tuesday 17 February 2026 03:21:16 +0000 (0:00:00.064) 0:00:17.821 ****** 2026-02-17 03:23:58.273084 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:23:58.273096 | orchestrator | 
2026-02-17 03:23:58.273110 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-17 03:23:58.273122 | orchestrator | Tuesday 17 February 2026 03:21:16 +0000 (0:00:00.222) 0:00:18.043 ****** 2026-02-17 03:23:58.273132 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:23:58.273139 | orchestrator | 2026-02-17 03:23:58.273146 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-17 03:23:58.273153 | orchestrator | Tuesday 17 February 2026 03:21:16 +0000 (0:00:00.707) 0:00:18.751 ****** 2026-02-17 03:23:58.273161 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:23:58.273168 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:23:58.273175 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:23:58.273183 | orchestrator | 2026-02-17 03:23:58.273190 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-17 03:23:58.273197 | orchestrator | Tuesday 17 February 2026 03:22:24 +0000 (0:01:07.643) 0:01:26.394 ****** 2026-02-17 03:23:58.273205 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:23:58.273212 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:23:58.273219 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:23:58.273227 | orchestrator | 2026-02-17 03:23:58.273234 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-17 03:23:58.273241 | orchestrator | Tuesday 17 February 2026 03:23:47 +0000 (0:01:22.898) 0:02:49.292 ****** 2026-02-17 03:23:58.273249 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:23:58.273256 | orchestrator | 2026-02-17 03:23:58.273263 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-17 03:23:58.273271 | orchestrator | Tuesday 17 February 2026 03:23:48 +0000 
(0:00:00.566) 0:02:49.859 ****** 2026-02-17 03:23:58.273278 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:23:58.273285 | orchestrator | 2026-02-17 03:23:58.273293 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-17 03:23:58.273300 | orchestrator | Tuesday 17 February 2026 03:23:50 +0000 (0:00:02.847) 0:02:52.706 ****** 2026-02-17 03:23:58.273307 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:23:58.273314 | orchestrator | 2026-02-17 03:23:58.273322 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-17 03:23:58.273329 | orchestrator | Tuesday 17 February 2026 03:23:53 +0000 (0:00:02.213) 0:02:54.920 ****** 2026-02-17 03:23:58.273338 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:23:58.273346 | orchestrator | 2026-02-17 03:23:58.273354 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-17 03:23:58.273362 | orchestrator | Tuesday 17 February 2026 03:23:55 +0000 (0:00:02.630) 0:02:57.550 ****** 2026-02-17 03:23:58.273370 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:23:58.273379 | orchestrator | 2026-02-17 03:23:58.273387 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:23:58.273396 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 03:23:58.273405 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-17 03:23:58.273419 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-17 03:23:58.273427 | orchestrator | 2026-02-17 03:23:58.273436 | orchestrator | 2026-02-17 03:23:58.273449 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:23:58.273458 | orchestrator | Tuesday 17 
February 2026 03:23:58 +0000 (0:00:02.508) 0:03:00.059 ****** 2026-02-17 03:23:58.273466 | orchestrator | =============================================================================== 2026-02-17 03:23:58.273474 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.90s 2026-02-17 03:23:58.273483 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.64s 2026-02-17 03:23:58.273491 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.85s 2026-02-17 03:23:58.273499 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.63s 2026-02-17 03:23:58.273507 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.51s 2026-02-17 03:23:58.273515 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.44s 2026-02-17 03:23:58.273524 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.34s 2026-02-17 03:23:58.273532 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.28s 2026-02-17 03:23:58.273540 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.21s 2026-02-17 03:23:58.273549 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.85s 2026-02-17 03:23:58.273557 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.78s 2026-02-17 03:23:58.273566 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.73s 2026-02-17 03:23:58.273575 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.25s 2026-02-17 03:23:58.273584 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.92s 2026-02-17 03:23:58.273593 | orchestrator | opensearch : Perform a 
flush -------------------------------------------- 0.71s 2026-02-17 03:23:58.273602 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2026-02-17 03:23:58.273616 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-02-17 03:23:58.512440 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-02-17 03:23:58.512498 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-02-17 03:23:58.512506 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-02-17 03:24:00.323091 | orchestrator | 2026-02-17 03:24:00 | INFO  | Task f205817a-8e11-4993-a493-0ae5b7fe0b78 (memcached) was prepared for execution. 2026-02-17 03:24:00.323183 | orchestrator | 2026-02-17 03:24:00 | INFO  | It takes a moment until task f205817a-8e11-4993-a493-0ae5b7fe0b78 (memcached) has been started and output is visible here. 
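The PLAY RECAP lines in the opensearch run above follow Ansible's fixed `host : key=value ...` layout, so they can be checked mechanically. A minimal sketch in Python (the counter names are taken from the recap lines in this log; the parsing helper itself is hypothetical, not part of Ansible):

```python
import re

def parse_recap_line(line: str) -> dict:
    """Parse one Ansible PLAY RECAP line, e.g.
    'testbed-node-0 : ok=18 changed=11 unreachable=0 failed=0 ...',
    into {'host': ..., 'ok': 18, 'changed': 11, ...}."""
    host, _, rest = line.partition(":")
    # Every counter appears as name=<int>; collect them all.
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return {"host": host.strip(), **counters}

# Recap line copied from the opensearch play above.
recap = parse_recap_line(
    "testbed-node-0 : ok=18 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0"
)
assert recap["failed"] == 0 and recap["unreachable"] == 0  # deploy step succeeded
```

Such a check is a cheap way for a CI post-processing step to fail fast on any non-zero `failed`/`unreachable` count without re-running the play.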
2026-02-17 03:24:13.591256 | orchestrator | 2026-02-17 03:24:13.591351 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 03:24:13.591364 | orchestrator | 2026-02-17 03:24:13.591371 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 03:24:13.591377 | orchestrator | Tuesday 17 February 2026 03:24:05 +0000 (0:00:00.323) 0:00:00.323 ****** 2026-02-17 03:24:13.591381 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:24:13.591387 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:24:13.591391 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:24:13.591395 | orchestrator | 2026-02-17 03:24:13.591399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 03:24:13.591403 | orchestrator | Tuesday 17 February 2026 03:24:05 +0000 (0:00:00.349) 0:00:00.672 ****** 2026-02-17 03:24:13.591408 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-17 03:24:13.591412 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-17 03:24:13.591416 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-17 03:24:13.591420 | orchestrator | 2026-02-17 03:24:13.591424 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-17 03:24:13.591445 | orchestrator | 2026-02-17 03:24:13.591449 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-17 03:24:13.591453 | orchestrator | Tuesday 17 February 2026 03:24:06 +0000 (0:00:00.507) 0:00:01.180 ****** 2026-02-17 03:24:13.591457 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:24:13.591462 | orchestrator | 2026-02-17 03:24:13.591466 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-17 03:24:13.591470 | orchestrator | Tuesday 17 February 2026 03:24:06 +0000 (0:00:00.586) 0:00:01.767 ****** 2026-02-17 03:24:13.591473 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-17 03:24:13.591478 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-17 03:24:13.591481 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-17 03:24:13.591485 | orchestrator | 2026-02-17 03:24:13.591489 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-17 03:24:13.591493 | orchestrator | Tuesday 17 February 2026 03:24:07 +0000 (0:00:00.709) 0:00:02.477 ****** 2026-02-17 03:24:13.591497 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-17 03:24:13.591500 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-17 03:24:13.591504 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-17 03:24:13.591508 | orchestrator | 2026-02-17 03:24:13.591512 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-17 03:24:13.591516 | orchestrator | Tuesday 17 February 2026 03:24:09 +0000 (0:00:02.086) 0:00:04.564 ****** 2026-02-17 03:24:13.591531 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:24:13.591535 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:24:13.591539 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:24:13.591542 | orchestrator | 2026-02-17 03:24:13.591546 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-17 03:24:13.591550 | orchestrator | Tuesday 17 February 2026 03:24:11 +0000 (0:00:01.547) 0:00:06.111 ****** 2026-02-17 03:24:13.591554 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:24:13.591558 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:24:13.591562 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:24:13.591566 | orchestrator | 2026-02-17 
03:24:13.591569 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:24:13.591573 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:24:13.591579 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:24:13.591583 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:24:13.591587 | orchestrator | 2026-02-17 03:24:13.591591 | orchestrator | 2026-02-17 03:24:13.591594 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:24:13.591598 | orchestrator | Tuesday 17 February 2026 03:24:13 +0000 (0:00:02.077) 0:00:08.188 ****** 2026-02-17 03:24:13.591602 | orchestrator | =============================================================================== 2026-02-17 03:24:13.591606 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.09s 2026-02-17 03:24:13.591610 | orchestrator | memcached : Restart memcached container --------------------------------- 2.08s 2026-02-17 03:24:13.591614 | orchestrator | memcached : Check memcached container ----------------------------------- 1.55s 2026-02-17 03:24:13.591618 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.71s 2026-02-17 03:24:13.591622 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.59s 2026-02-17 03:24:13.591625 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-02-17 03:24:13.591634 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-02-17 03:24:16.482802 | orchestrator | 2026-02-17 03:24:16 | INFO  | Task e36535f0-6e00-4201-8a58-03c2d8614e28 (redis) was prepared for execution. 
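The service definitions dumped as loop items in the opensearch play above all carry the same Kolla-style healthcheck shape (`interval`/`retries`/`start_period`/`test`/`timeout`, with a `CMD-SHELL` test list). A small sketch that validates such a dict before use (field names and the example values are copied from this log; the validator itself is hypothetical, not part of kolla-ansible):

```python
# Keys every Kolla-style healthcheck dict in this log carries.
REQUIRED_KEYS = {"interval", "retries", "start_period", "test", "timeout"}

def validate_healthcheck(hc: dict) -> bool:
    """Check a Kolla-style healthcheck dict: all required keys present,
    the numeric fields parse as integers, and 'test' is a CMD-SHELL list."""
    if not REQUIRED_KEYS <= hc.keys():
        return False
    for key in ("interval", "retries", "start_period", "timeout"):
        if not str(hc[key]).isdigit():
            return False
    return isinstance(hc["test"], list) and hc["test"][0] == "CMD-SHELL"

# Example taken verbatim from the opensearch item logged above.
opensearch_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
    "timeout": "30",
}
assert validate_healthcheck(opensearch_hc)
```

Note that the numeric fields are strings in the logged dicts, which is why the sketch normalizes with `str(...).isdigit()` rather than assuming `int` values.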
2026-02-17 03:24:16.482923 | orchestrator | 2026-02-17 03:24:16 | INFO  | It takes a moment until task e36535f0-6e00-4201-8a58-03c2d8614e28 (redis) has been started and output is visible here.
2026-02-17 03:24:26.095831 | orchestrator |
2026-02-17 03:24:26.095951 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 03:24:26.095969 | orchestrator |
2026-02-17 03:24:26.095981 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 03:24:26.095993 | orchestrator | Tuesday 17 February 2026 03:24:21 +0000 (0:00:00.299) 0:00:00.299 ******
2026-02-17 03:24:26.096004 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:24:26.096017 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:24:26.096028 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:24:26.096039 | orchestrator |
2026-02-17 03:24:26.096050 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 03:24:26.096110 | orchestrator | Tuesday 17 February 2026 03:24:21 +0000 (0:00:00.318) 0:00:00.617 ******
2026-02-17 03:24:26.096122 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-17 03:24:26.096134 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-17 03:24:26.096145 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-17 03:24:26.096156 | orchestrator |
2026-02-17 03:24:26.096167 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-17 03:24:26.096178 | orchestrator |
2026-02-17 03:24:26.096189 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-17 03:24:26.096201 | orchestrator | Tuesday 17 February 2026 03:24:21 +0000 (0:00:00.440) 0:00:01.058 ******
2026-02-17 03:24:26.096212 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:24:26.096224 | orchestrator |
2026-02-17 03:24:26.096235 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-17 03:24:26.096246 | orchestrator | Tuesday 17 February 2026 03:24:22 +0000 (0:00:00.505) 0:00:01.563 ******
2026-02-17 03:24:26.096261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096387 | orchestrator |
2026-02-17 03:24:26.096400 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-17 03:24:26.096413 | orchestrator | Tuesday 17 February 2026 03:24:23 +0000 (0:00:01.101) 0:00:02.665 ******
2026-02-17 03:24:26.096425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:26.096606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300454 | orchestrator |
2026-02-17 03:24:30.300463 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-17 03:24:30.300470 | orchestrator | Tuesday 17 February 2026 03:24:26 +0000 (0:00:02.665) 0:00:05.330 ******
2026-02-17 03:24:30.300478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300566 | orchestrator |
2026-02-17 03:24:30.300572 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-02-17 03:24:30.300577 | orchestrator | Tuesday 17 February 2026 03:24:28 +0000 (0:00:02.537) 0:00:07.868 ******
2026-02-17 03:24:30.300582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:30.300629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-17 03:24:36.739892 | orchestrator |
2026-02-17 03:24:36.867723 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-17 03:24:36.867793 | orchestrator | Tuesday 17 February 2026 03:24:30 +0000 (0:00:01.469) 0:00:09.338 ******
2026-02-17 03:24:36.867807 | orchestrator |
2026-02-17 03:24:36.867819 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-17 03:24:36.867831 | orchestrator | Tuesday 17 February 2026 03:24:30 +0000 (0:00:00.065) 0:00:09.403 ******
2026-02-17 03:24:36.867842 | orchestrator |
2026-02-17 03:24:36.867853 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-17 03:24:36.867864 | orchestrator | Tuesday 17 February 2026 03:24:30 +0000 (0:00:00.066) 0:00:09.469 ******
2026-02-17 03:24:36.867876 | orchestrator |
2026-02-17 03:24:36.867886 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-17 03:24:36.867898 | orchestrator | Tuesday 17 February 2026 03:24:30 +0000 (0:00:00.065) 0:00:09.535 ******
2026-02-17 03:24:36.867909 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:24:36.867921 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:24:36.867932 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:24:36.867943 | orchestrator |
2026-02-17 03:24:36.867955 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-17 03:24:36.867966 | orchestrator | Tuesday 17 February 2026 03:24:33 +0000 (0:00:02.942) 0:00:12.478 ******
2026-02-17 03:24:36.868014 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:24:36.868027 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:24:36.868038 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:24:36.868049 | orchestrator |
2026-02-17 03:24:36.868061 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:24:36.868119 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 03:24:36.868132 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 03:24:36.868156 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 03:24:36.868167 | orchestrator |
2026-02-17 03:24:36.868178 | orchestrator |
2026-02-17 03:24:36.868189 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:24:36.868200 | orchestrator | Tuesday 17 February 2026 03:24:36 +0000 (0:00:03.233) 0:00:15.711 ******
2026-02-17 03:24:36.868211 | orchestrator | ===============================================================================
2026-02-17 03:24:36.868222 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.23s
2026-02-17 03:24:36.868233 | orchestrator | redis : Restart redis container ----------------------------------------- 2.94s
2026-02-17 03:24:36.868244 | orchestrator | redis : Copying over default config.json files -------------------------- 2.67s
2026-02-17 03:24:36.868255 | orchestrator | redis : Copying over redis config files --------------------------------- 2.54s
2026-02-17 03:24:36.868266 | orchestrator | redis : Check redis containers ------------------------------------------ 1.47s
2026-02-17 03:24:36.868277 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.10s
2026-02-17 03:24:36.868288 | orchestrator | redis : include_tasks --------------------------------------------------- 0.51s
2026-02-17 03:24:36.868298 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-02-17 03:24:36.868309 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-02-17 03:24:36.868320 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s
2026-02-17 03:24:38.946275 | orchestrator | 2026-02-17 03:24:38 | INFO  | Task d144b05c-4e62-4120-936e-1fa382afac9a (mariadb) was prepared for execution.
2026-02-17 03:24:38.946368 | orchestrator | 2026-02-17 03:24:38 | INFO  | It takes a moment until task d144b05c-4e62-4120-936e-1fa382afac9a (mariadb) has been started and output is visible here.
2026-02-17 03:24:53.927485 | orchestrator |
2026-02-17 03:24:53.927593 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 03:24:53.927606 | orchestrator |
2026-02-17 03:24:53.927616 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 03:24:53.927625 | orchestrator | Tuesday 17 February 2026 03:24:43 +0000 (0:00:00.168) 0:00:00.168 ******
2026-02-17 03:24:53.927633 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:24:53.927643 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:24:53.927651 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:24:53.927659 | orchestrator |
2026-02-17 03:24:53.927667 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 03:24:53.927676 | orchestrator | Tuesday 17 February 2026 03:24:43 +0000 (0:00:00.324) 0:00:00.492 ******
2026-02-17 03:24:53.927685 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-17 03:24:53.927693 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-17 03:24:53.927701 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-17 03:24:53.927709 | orchestrator |
2026-02-17 03:24:53.927717 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-17 03:24:53.927726 | orchestrator |
2026-02-17 03:24:53.927734 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-17 03:24:53.927761 | orchestrator | Tuesday 17 February 2026 03:24:44 +0000 (0:00:00.607) 0:00:01.100 ******
2026-02-17 03:24:53.927770 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 03:24:53.927778 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 03:24:53.927786 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 03:24:53.927794 | orchestrator |
2026-02-17 03:24:53.927802 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-17 03:24:53.927810 | orchestrator | Tuesday 17 February 2026 03:24:44 +0000 (0:00:00.377) 0:00:01.477 ******
2026-02-17 03:24:53.927819 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:24:53.927828 | orchestrator |
2026-02-17 03:24:53.927836 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-02-17 03:24:53.927844 | orchestrator | Tuesday 17 February 2026 03:24:45 +0000 (0:00:00.557) 0:00:02.035 ******
2026-02-17 03:24:53.927870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 03:24:53.927899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 03:24:53.927923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 03:24:53.927932 | orchestrator |
2026-02-17 03:24:53.927941 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-02-17 03:24:53.927949 | orchestrator | Tuesday 17 February 2026 03:24:48 +0000 (0:00:02.771) 0:00:04.806 ******
2026-02-17 03:24:53.927957 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:24:53.927966 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:24:53.927991 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:24:53.928007 | orchestrator |
2026-02-17 03:24:53.928015 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-17 03:24:53.928023 | orchestrator | Tuesday 17 February 2026 03:24:48 +0000 (0:00:00.787) 0:00:05.593 ******
2026-02-17 03:24:53.928032 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:24:53.928041 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:24:53.928050 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:24:53.928060 | orchestrator |
2026-02-17 03:24:53.928069 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-17 03:24:53.928132 | orchestrator | Tuesday 17 February 2026 03:24:50 +0000 (0:00:01.497) 0:00:07.091 ******
2026-02-17 03:24:53.928151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 03:25:02.434775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 03:25:02.434877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor',
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-17 03:25:02.434913 | orchestrator | 2026-02-17 03:25:02.434926 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-17 03:25:02.434936 | orchestrator | Tuesday 17 February 2026 03:24:53 +0000 (0:00:03.506) 0:00:10.597 ****** 2026-02-17 03:25:02.434945 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:25:02.434955 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:25:02.434963 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:25:02.434972 | orchestrator | 2026-02-17 03:25:02.434980 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-17 03:25:02.435006 | orchestrator | Tuesday 17 February 2026 03:24:55 +0000 (0:00:01.176) 0:00:11.774 ****** 2026-02-17 03:25:02.435015 | 
orchestrator | changed: [testbed-node-0] 2026-02-17 03:25:02.435024 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:25:02.435033 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:25:02.435044 | orchestrator | 2026-02-17 03:25:02.435054 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-17 03:25:02.435064 | orchestrator | Tuesday 17 February 2026 03:24:59 +0000 (0:00:04.196) 0:00:15.971 ****** 2026-02-17 03:25:02.435074 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:25:02.435132 | orchestrator | 2026-02-17 03:25:02.435142 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-17 03:25:02.435151 | orchestrator | Tuesday 17 February 2026 03:24:59 +0000 (0:00:00.586) 0:00:16.557 ****** 2026-02-17 03:25:02.435169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:02.435190 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:25:02.435211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:07.585586 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:25:07.585733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:07.585815 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:25:07.585832 | orchestrator | 2026-02-17 03:25:07.585847 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-17 03:25:07.585863 | orchestrator | Tuesday 17 February 2026 03:25:02 +0000 (0:00:02.548) 0:00:19.105 ****** 2026-02-17 03:25:07.585879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:07.585895 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:25:07.585941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:07.585971 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:25:07.585987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:07.586000 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:25:07.586012 | orchestrator | 2026-02-17 03:25:07.586117 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-17 03:25:07.586132 | orchestrator | Tuesday 17 February 2026 03:25:05 +0000 (0:00:02.627) 0:00:21.733 ****** 2026-02-17 03:25:07.586165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:10.408697 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:25:10.408849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:10.408875 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:25:10.408909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 03:25:10.408952 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:25:10.408965 | orchestrator | 2026-02-17 03:25:10.408979 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-17 03:25:10.408992 | orchestrator | Tuesday 17 February 2026 03:25:07 +0000 (0:00:02.527) 0:00:24.260 ****** 2026-02-17 03:25:10.409025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-17 03:25:10.409040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-17 03:25:10.409070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-17 03:27:29.523002 | orchestrator | 2026-02-17 03:27:29.523195 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-17 03:27:29.523221 | orchestrator | Tuesday 17 February 2026 03:25:10 +0000 (0:00:02.819) 0:00:27.079 ****** 2026-02-17 03:27:29.523241 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:27:29.523263 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:27:29.523281 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:27:29.523300 | orchestrator | 2026-02-17 03:27:29.523318 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-17 03:27:29.523335 | orchestrator | Tuesday 17 February 2026 03:25:11 +0000 (0:00:00.868) 0:00:27.948 ****** 2026-02-17 03:27:29.523353 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:27:29.523372 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:27:29.523390 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:27:29.523409 | orchestrator | 2026-02-17 03:27:29.523427 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] *************
2026-02-17 03:27:29.523443 | orchestrator | Tuesday 17 February 2026 03:25:11 +0000 (0:00:00.569) 0:00:28.517 ******
2026-02-17 03:27:29.523460 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:29.523477 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:27:29.523494 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:27:29.523511 | orchestrator |
2026-02-17 03:27:29.523529 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-17 03:27:29.523548 | orchestrator | Tuesday 17 February 2026 03:25:12 +0000 (0:00:00.353) 0:00:28.870 ******
2026-02-17 03:27:29.523568 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-17 03:27:29.523589 | orchestrator | ...ignoring
2026-02-17 03:27:29.523607 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-17 03:27:29.523626 | orchestrator | ...ignoring
2026-02-17 03:27:29.523645 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-17 03:27:29.523664 | orchestrator | ...ignoring
2026-02-17 03:27:29.523712 | orchestrator |
2026-02-17 03:27:29.523730 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-17 03:27:29.523749 | orchestrator | Tuesday 17 February 2026 03:25:23 +0000 (0:00:10.897) 0:00:39.768 ******
2026-02-17 03:27:29.523767 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:29.523786 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:27:29.523804 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:27:29.523822 | orchestrator |
2026-02-17 03:27:29.523841 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-17 03:27:29.523859 | orchestrator | Tuesday 17 February 2026 03:25:23 +0000 (0:00:00.433) 0:00:40.201 ******
2026-02-17 03:27:29.523878 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:29.523897 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:29.523915 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:29.523934 | orchestrator |
2026-02-17 03:27:29.523953 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-17 03:27:29.523971 | orchestrator | Tuesday 17 February 2026 03:25:24 +0000 (0:00:00.715) 0:00:40.917 ******
2026-02-17 03:27:29.523990 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:29.524009 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:29.524028 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:29.524046 | orchestrator |
2026-02-17 03:27:29.524080 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-17 03:27:29.524099 | orchestrator | Tuesday 17 February 2026 03:25:24 +0000 (0:00:00.454) 0:00:41.371 ******
2026-02-17 03:27:29.524118 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:29.524152 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:29.524169 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:29.524185 | orchestrator |
2026-02-17 03:27:29.524201 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-17 03:27:29.524218 | orchestrator | Tuesday 17 February 2026 03:25:25 +0000 (0:00:00.452) 0:00:41.824 ******
2026-02-17 03:27:29.524234 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:29.524251 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:27:29.524268 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:27:29.524303 | orchestrator |
2026-02-17 03:27:29.524320 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-17 03:27:29.524351 | orchestrator | Tuesday 17 February 2026 03:25:25 +0000 (0:00:00.518) 0:00:42.342 ******
2026-02-17 03:27:29.524369 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:29.524385 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:29.524402 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:29.524419 | orchestrator |
2026-02-17 03:27:29.524435 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-17 03:27:29.524452 | orchestrator | Tuesday 17 February 2026 03:25:26 +0000 (0:00:00.913) 0:00:43.255 ******
2026-02-17 03:27:29.524468 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:29.524485 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:29.524501 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-17 03:27:29.524518 | orchestrator |
2026-02-17 03:27:29.524535 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-17 03:27:29.524552 | orchestrator | Tuesday 17 February 2026 03:25:26 +0000 (0:00:00.414) 0:00:43.670 ******
2026-02-17 03:27:29.524568 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:27:29.524585 | orchestrator |
2026-02-17 03:27:29.524601 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-17 03:27:29.524618 | orchestrator | Tuesday 17 February 2026 03:25:37 +0000 (0:00:10.081) 0:00:53.751 ******
2026-02-17 03:27:29.524634 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:29.524651 | orchestrator |
2026-02-17 03:27:29.524668 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-17 03:27:29.524686 | orchestrator | Tuesday 17 February 2026 03:25:37 +0000 (0:00:00.131) 0:00:53.883 ******
2026-02-17 03:27:29.524703 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:29.524753 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:29.524767 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:29.524781 | orchestrator |
2026-02-17 03:27:29.524794 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-17 03:27:29.524808 | orchestrator | Tuesday 17 February 2026 03:25:38 +0000 (0:00:01.079) 0:00:54.962 ******
2026-02-17 03:27:29.524822 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:27:29.524835 | orchestrator |
2026-02-17 03:27:29.524849 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-17 03:27:29.524863 | orchestrator | Tuesday 17 February 2026 03:25:46 +0000 (0:00:08.174) 0:01:03.136 ******
2026-02-17 03:27:29.524876 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:29.524890 | orchestrator |
2026-02-17 03:27:29.524903 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-17 03:27:29.524917 | orchestrator | Tuesday 17 February 2026 03:25:49 +0000 (0:00:02.566) 0:01:05.702 ******
2026-02-17 03:27:29.524930 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:29.524944 | orchestrator |
2026-02-17 03:27:29.524958 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-17 03:27:29.524972 | orchestrator | Tuesday 17 February 2026 03:25:51 +0000 (0:00:02.488) 0:01:08.191 ******
2026-02-17 03:27:29.524985 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:27:29.524999 | orchestrator |
2026-02-17 03:27:29.525012 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-17 03:27:29.525026 | orchestrator | Tuesday 17 February 2026 03:25:51 +0000 (0:00:00.149) 0:01:08.340 ******
2026-02-17 03:27:29.525038 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:29.525050 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:29.525063 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:29.525075 | orchestrator |
2026-02-17 03:27:29.525088 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-17 03:27:29.525101 | orchestrator | Tuesday 17 February 2026 03:25:51 +0000 (0:00:00.334) 0:01:08.674 ******
2026-02-17 03:27:29.525114 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:29.525127 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-17 03:27:29.525199 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:27:29.525213 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:27:29.525226 | orchestrator |
2026-02-17 03:27:29.525240 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-17 03:27:29.525253 | orchestrator | skipping: no hosts matched
2026-02-17 03:27:29.525266 | orchestrator |
2026-02-17 03:27:29.525280 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-17 03:27:29.525293 | orchestrator |
2026-02-17 03:27:29.525306 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-17 03:27:29.525320 | orchestrator | Tuesday 17 February 2026 03:25:52 +0000 (0:00:00.565) 0:01:09.239 ******
2026-02-17 03:27:29.525333 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:27:29.525346 | orchestrator |
2026-02-17 03:27:29.525360 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-17 03:27:29.525373 | orchestrator | Tuesday 17 February 2026 03:26:11 +0000 (0:00:19.080) 0:01:28.320 ******
2026-02-17 03:27:29.525386 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:27:29.525399 | orchestrator |
2026-02-17 03:27:29.525412 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-17 03:27:29.525426 | orchestrator | Tuesday 17 February 2026 03:26:28 +0000 (0:00:16.614) 0:01:44.934 ******
2026-02-17 03:27:29.525439 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:27:29.525452 | orchestrator |
2026-02-17 03:27:29.525470 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-17 03:27:29.525484 | orchestrator |
2026-02-17 03:27:29.525505 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-17 03:27:29.525519 | orchestrator | Tuesday 17 February 2026 03:26:30 +0000 (0:00:02.579) 0:01:47.514 ******
2026-02-17 03:27:29.525542 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:27:29.525555 | orchestrator |
2026-02-17 03:27:29.525569 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-17 03:27:29.525582 | orchestrator | Tuesday 17 February 2026 03:26:54 +0000 (0:00:23.818) 0:02:11.332 ******
2026-02-17 03:27:29.525595 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:27:29.525609 | orchestrator |
2026-02-17 03:27:29.525622 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-17 03:27:29.525636 | orchestrator | Tuesday 17 February 2026 03:27:06 +0000 (0:00:11.540) 0:02:22.873 ******
2026-02-17 03:27:29.525649 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:27:29.525662 | orchestrator |
2026-02-17 03:27:29.525676 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-17 03:27:29.525689 | orchestrator |
2026-02-17 03:27:29.525703 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-17 03:27:29.525717 | orchestrator | Tuesday 17 February 2026 03:27:09 +0000 (0:00:02.870) 0:02:25.744 ******
2026-02-17 03:27:29.525730 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:27:29.525743 | orchestrator |
2026-02-17 03:27:29.525757 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-17 03:27:29.525770 | orchestrator | Tuesday 17 February 2026 03:27:21 +0000 (0:00:12.338) 0:02:38.082 ******
2026-02-17 03:27:29.525784 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:29.525797 | orchestrator |
2026-02-17 03:27:29.525811 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-17 03:27:29.525824 | orchestrator | Tuesday 17 February 2026 03:27:25 +0000 (0:00:04.594) 0:02:42.677 ******
2026-02-17 03:27:29.525838 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:29.525851 | orchestrator |
2026-02-17 03:27:29.525864 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-17 03:27:29.525877 | orchestrator |
2026-02-17 03:27:29.525891 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-17 03:27:29.525905 | orchestrator | Tuesday 17 February 2026 03:27:28 +0000 (0:00:02.991) 0:02:45.669 ******
2026-02-17 03:27:29.525918 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:27:29.525932 | orchestrator |
2026-02-17 03:27:29.525946 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-17 03:27:29.525967 | orchestrator | Tuesday 17 February 2026 03:27:29 +0000 (0:00:00.521) 0:02:46.190 ******
2026-02-17 03:27:42.575117 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:42.575270 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:42.575281 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:27:42.575286 | orchestrator |
2026-02-17 03:27:42.575293 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-17 03:27:42.575299 | orchestrator | Tuesday 17 February 2026 03:27:31 +0000 (0:00:02.263) 0:02:48.454 ******
2026-02-17 03:27:42.575304 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:42.575309 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:42.575314 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:27:42.575319 | orchestrator |
2026-02-17 03:27:42.575324 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-17 03:27:42.575328 | orchestrator | Tuesday 17 February 2026 03:27:34 +0000 (0:00:02.262) 0:02:50.716 ******
2026-02-17 03:27:42.575333 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:42.575338 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:42.575343 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:27:42.575347 | orchestrator |
2026-02-17 03:27:42.575352 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-17 03:27:42.575356 | orchestrator | Tuesday 17 February 2026 03:27:36 +0000 (0:00:02.496) 0:02:53.213 ******
2026-02-17 03:27:42.575361 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:42.575366 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:42.575370 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:27:42.575395 | orchestrator |
2026-02-17 03:27:42.575400 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-17 03:27:42.575404 | orchestrator | Tuesday 17 February 2026 03:27:38 +0000 (0:00:02.183) 0:02:55.397 ******
2026-02-17 03:27:42.575409 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:27:42.575414 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:42.575419 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:27:42.575423 | orchestrator |
2026-02-17 03:27:42.575428 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-17 03:27:42.575433 | orchestrator | Tuesday 17 February 2026 03:27:41 +0000 (0:00:03.029) 0:02:58.426 ******
2026-02-17 03:27:42.575437 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:42.575442 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:27:42.575446 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:27:42.575451 | orchestrator |
2026-02-17 03:27:42.575456 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:27:42.575461 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-17 03:27:42.575467 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-17 03:27:42.575472 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-17 03:27:42.575477 | orchestrator |
2026-02-17 03:27:42.575481 | orchestrator |
2026-02-17 03:27:42.575486 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:27:42.575490 | orchestrator | Tuesday 17 February 2026 03:27:42 +0000 (0:00:00.454) 0:02:58.881 ******
2026-02-17 03:27:42.575495 | orchestrator | ===============================================================================
2026-02-17 03:27:42.575512 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.90s
2026-02-17 03:27:42.575517 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.16s
2026-02-17 03:27:42.575522 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.34s
2026-02-17 03:27:42.575526 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s
2026-02-17 03:27:42.575531 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.08s
2026-02-17 03:27:42.575535 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.17s
2026-02-17 03:27:42.575540 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.45s
2026-02-17 03:27:42.575544 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s
2026-02-17 03:27:42.575549 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.20s
2026-02-17 03:27:42.575554 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.51s
2026-02-17 03:27:42.575558 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.03s
2026-02-17 03:27:42.575563 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.99s
2026-02-17 03:27:42.575576 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.82s
2026-02-17 03:27:42.575580 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.77s
2026-02-17 03:27:42.575585 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.63s
2026-02-17 03:27:42.575591 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.57s
2026-02-17 03:27:42.575601 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.55s
2026-02-17 03:27:42.575605 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.53s
2026-02-17 03:27:42.575610 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.50s
2026-02-17 03:27:42.575620 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.49s
2026-02-17 03:27:45.098889 | orchestrator | 2026-02-17 03:27:45 | INFO  | Task a27e484d-1fb5-4b32-a9bc-b0c83ac13774 (rabbitmq) was prepared for execution.
2026-02-17 03:27:45.099007 | orchestrator | 2026-02-17 03:27:45 | INFO  | It takes a moment until task a27e484d-1fb5-4b32-a9bc-b0c83ac13774 (rabbitmq) has been started and output is visible here.
2026-02-17 03:27:58.864926 | orchestrator |
2026-02-17 03:27:58.865019 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 03:27:58.865028 | orchestrator |
2026-02-17 03:27:58.865035 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 03:27:58.865042 | orchestrator | Tuesday 17 February 2026 03:27:49 +0000 (0:00:00.196) 0:00:00.196 ******
2026-02-17 03:27:58.865048 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:58.865056 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:27:58.865062 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:27:58.865068 | orchestrator |
2026-02-17 03:27:58.865074 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 03:27:58.865081 | orchestrator | Tuesday 17 February 2026 03:27:49 +0000 (0:00:00.304) 0:00:00.501 ******
2026-02-17 03:27:58.865088 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-17 03:27:58.865095 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-17 03:27:58.865102 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-17 03:27:58.865108 | orchestrator |
2026-02-17 03:27:58.865115 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-17 03:27:58.865123 | orchestrator |
2026-02-17 03:27:58.865130 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-17 03:27:58.865136 | orchestrator | Tuesday 17 February 2026 03:27:50 +0000 (0:00:00.616) 0:00:01.118 ******
2026-02-17 03:27:58.865190 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:27:58.865198 | orchestrator |
2026-02-17 03:27:58.865204 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-17 03:27:58.865211 | orchestrator | Tuesday 17 February 2026 03:27:50 +0000 (0:00:00.514) 0:00:01.633 ******
2026-02-17 03:27:58.865217 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:58.865224 | orchestrator |
2026-02-17 03:27:58.865230 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-17 03:27:58.865237 | orchestrator | Tuesday 17 February 2026 03:27:51 +0000 (0:00:00.959) 0:00:02.592 ******
2026-02-17 03:27:58.865243 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:58.865251 | orchestrator |
2026-02-17 03:27:58.865257 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-17 03:27:58.865264 | orchestrator | Tuesday 17 February 2026 03:27:52 +0000 (0:00:00.398) 0:00:02.990 ******
2026-02-17 03:27:58.865271 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:58.865277 | orchestrator |
2026-02-17 03:27:58.865284 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-17 03:27:58.865290 | orchestrator | Tuesday 17 February 2026 03:27:52 +0000 (0:00:00.417) 0:00:03.378 ******
2026-02-17 03:27:58.865296 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:58.865302 | orchestrator |
2026-02-17 03:27:58.865309 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-17 03:27:58.865315 | orchestrator | Tuesday 17 February 2026 03:27:53 +0000 (0:00:00.585) 0:00:03.796 ******
2026-02-17 03:27:58.865322 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:58.865328 | orchestrator |
2026-02-17 03:27:58.865335 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-17 03:27:58.865341 | orchestrator | Tuesday 17 February 2026 03:27:53 +0000 (0:00:00.876) 0:00:04.382 ******
2026-02-17 03:27:58.865363 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:27:58.865390 | orchestrator |
2026-02-17 03:27:58.865397 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-17 03:27:58.865403 | orchestrator | Tuesday 17 February 2026 03:27:54 +0000 (0:00:00.894) 0:00:05.258 ******
2026-02-17 03:27:58.865410 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:27:58.865416 | orchestrator |
2026-02-17 03:27:58.865423 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-02-17 03:27:58.865430 | orchestrator | Tuesday 17 February 2026 03:27:55 +0000 (0:00:00.364) 0:00:06.153 ******
2026-02-17 03:27:58.865436 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:58.865443 | orchestrator |
2026-02-17 03:27:58.865449 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-02-17 03:27:58.865456 | orchestrator | Tuesday 17 February 2026 03:27:55 +0000 (0:00:00.447) 0:00:06.517 ******
2026-02-17 03:27:58.865463 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:27:58.865469 | orchestrator |
2026-02-17 03:27:58.865476 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-02-17 03:27:58.865483 | orchestrator | Tuesday 17 February 2026 03:27:56 +0000 (0:00:00.447) 0:00:06.965 ******
2026-02-17 03:27:58.865511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:27:58.865521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:27:58.865534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:27:58.865547 | orchestrator |
2026-02-17 03:27:58.865554 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-02-17 03:27:58.865561 | orchestrator | Tuesday 17 February 2026 03:27:57 +0000 (0:00:00.875) 0:00:07.841 ******
2026-02-17 03:27:58.865568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:27:58.865582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:28:17.701891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:28:17.702115 | orchestrator |
2026-02-17 03:28:17.702226 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-17 03:28:17.702292 | orchestrator | Tuesday 17 February 2026 03:27:58 +0000 (0:00:01.750) 0:00:09.592 ******
2026-02-17 03:28:17.702315 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-17 03:28:17.702335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-17 03:28:17.702355 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-17 03:28:17.702375 | orchestrator |
2026-02-17 03:28:17.702395 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-17 03:28:17.702415 | orchestrator | Tuesday 17 February 2026 03:28:00 +0000 (0:00:01.538) 0:00:11.130 ******
2026-02-17 03:28:17.702453 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-17 03:28:17.702474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-17 03:28:17.702493 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-17 03:28:17.702512 | orchestrator |
2026-02-17 03:28:17.702531 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-17 03:28:17.702549 | orchestrator | Tuesday 17 February 2026 03:28:02 +0000 (0:00:01.731) 0:00:12.862 ******
2026-02-17 03:28:17.702566 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-17 03:28:17.702583 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-17 03:28:17.702601 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-17 03:28:17.702620 | orchestrator |
2026-02-17 03:28:17.702638 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-17 03:28:17.702656 | orchestrator | Tuesday 17 February 2026 03:28:03 +0000 (0:00:01.315) 0:00:14.177 ******
2026-02-17 03:28:17.702673 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-17 03:28:17.702691 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-17 03:28:17.702710 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-17 03:28:17.702728 | orchestrator |
2026-02-17 03:28:17.702746 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-17 03:28:17.702757 | orchestrator | Tuesday 17 February 2026 03:28:05 +0000 (0:00:01.746) 0:00:15.924 ******
2026-02-17 03:28:17.702768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-17 03:28:17.702779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-17 03:28:17.702790 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-17 03:28:17.702801 | orchestrator |
2026-02-17 03:28:17.702811 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-17 03:28:17.702823 | orchestrator | Tuesday 17 February 2026 03:28:06 +0000 (0:00:01.367) 0:00:17.292 ******
2026-02-17 03:28:17.702834 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-17 03:28:17.702845 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-17 03:28:17.702856 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-17 03:28:17.702866 | orchestrator |
2026-02-17 03:28:17.702877 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-17 03:28:17.702888 | orchestrator | Tuesday 17 February 2026 03:28:07 +0000 (0:00:01.418) 0:00:18.710 ******
2026-02-17 03:28:17.702899 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:28:17.702911 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:28:17.702946 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:28:17.702973 | orchestrator |
2026-02-17 03:28:17.702984 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-02-17 03:28:17.702995 | orchestrator | Tuesday 17 February 2026 03:28:08 +0000 (0:00:00.453) 0:00:19.164 ******
2026-02-17 03:28:17.703010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:28:17.703032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:28:17.703047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 03:28:17.703059 | orchestrator |
2026-02-17 03:28:17.703071 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-02-17 03:28:17.703082 | orchestrator | Tuesday 17 February 2026 03:28:09 +0000 (0:00:01.220) 0:00:20.385 ******
2026-02-17 03:28:17.703093 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:28:17.703104 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:28:17.703115 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:28:17.703126 | orchestrator |
2026-02-17 03:28:17.703137 | orchestrator | TASK
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-17 03:28:17.703188 | orchestrator | Tuesday 17 February 2026 03:28:10 +0000 (0:00:00.795) 0:00:21.181 ****** 2026-02-17 03:28:17.703201 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:28:17.703212 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:28:17.703223 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:28:17.703234 | orchestrator | 2026-02-17 03:28:17.703245 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-17 03:28:17.703265 | orchestrator | Tuesday 17 February 2026 03:28:17 +0000 (0:00:07.249) 0:00:28.431 ****** 2026-02-17 03:29:51.884567 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:29:51.884679 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:29:51.884694 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:29:51.884706 | orchestrator | 2026-02-17 03:29:51.884719 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-17 03:29:51.884731 | orchestrator | 2026-02-17 03:29:51.884742 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-17 03:29:51.884778 | orchestrator | Tuesday 17 February 2026 03:28:18 +0000 (0:00:00.560) 0:00:28.991 ****** 2026-02-17 03:29:51.884803 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:29:51.884816 | orchestrator | 2026-02-17 03:29:51.884827 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-17 03:29:51.884838 | orchestrator | Tuesday 17 February 2026 03:28:18 +0000 (0:00:00.592) 0:00:29.584 ****** 2026-02-17 03:29:51.884850 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:29:51.884861 | orchestrator | 2026-02-17 03:29:51.884872 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-17 03:29:51.884883 | orchestrator | Tuesday 17 
February 2026 03:28:19 +0000 (0:00:00.264) 0:00:29.848 ****** 2026-02-17 03:29:51.884894 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:29:51.884905 | orchestrator | 2026-02-17 03:29:51.884916 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-17 03:29:51.884927 | orchestrator | Tuesday 17 February 2026 03:28:20 +0000 (0:00:01.655) 0:00:31.504 ****** 2026-02-17 03:29:51.884938 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:29:51.884950 | orchestrator | 2026-02-17 03:29:51.884961 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-17 03:29:51.884971 | orchestrator | 2026-02-17 03:29:51.884982 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-17 03:29:51.884993 | orchestrator | Tuesday 17 February 2026 03:29:15 +0000 (0:00:54.680) 0:01:26.184 ****** 2026-02-17 03:29:51.885004 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:29:51.885015 | orchestrator | 2026-02-17 03:29:51.885026 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-17 03:29:51.885037 | orchestrator | Tuesday 17 February 2026 03:29:16 +0000 (0:00:00.667) 0:01:26.851 ****** 2026-02-17 03:29:51.885049 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:29:51.885062 | orchestrator | 2026-02-17 03:29:51.885074 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-17 03:29:51.885086 | orchestrator | Tuesday 17 February 2026 03:29:16 +0000 (0:00:00.231) 0:01:27.083 ****** 2026-02-17 03:29:51.885098 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:29:51.885111 | orchestrator | 2026-02-17 03:29:51.885125 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-17 03:29:51.885155 | orchestrator | Tuesday 17 February 2026 03:29:17 +0000 (0:00:01.631) 0:01:28.715 
****** 2026-02-17 03:29:51.885168 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:29:51.885181 | orchestrator | 2026-02-17 03:29:51.885252 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-17 03:29:51.885264 | orchestrator | 2026-02-17 03:29:51.885275 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-17 03:29:51.885286 | orchestrator | Tuesday 17 February 2026 03:29:32 +0000 (0:00:14.547) 0:01:43.262 ****** 2026-02-17 03:29:51.885297 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:29:51.885308 | orchestrator | 2026-02-17 03:29:51.885343 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-17 03:29:51.885355 | orchestrator | Tuesday 17 February 2026 03:29:33 +0000 (0:00:00.745) 0:01:44.008 ****** 2026-02-17 03:29:51.885366 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:29:51.885377 | orchestrator | 2026-02-17 03:29:51.885388 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-17 03:29:51.885400 | orchestrator | Tuesday 17 February 2026 03:29:33 +0000 (0:00:00.254) 0:01:44.263 ****** 2026-02-17 03:29:51.885410 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:29:51.885422 | orchestrator | 2026-02-17 03:29:51.885433 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-17 03:29:51.885444 | orchestrator | Tuesday 17 February 2026 03:29:40 +0000 (0:00:06.575) 0:01:50.838 ****** 2026-02-17 03:29:51.885455 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:29:51.885466 | orchestrator | 2026-02-17 03:29:51.885477 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-17 03:29:51.885489 | orchestrator | 2026-02-17 03:29:51.885500 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-17 03:29:51.885511 | orchestrator | Tuesday 17 February 2026 03:29:48 +0000 (0:00:08.585) 0:01:59.423 ****** 2026-02-17 03:29:51.885522 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:29:51.885533 | orchestrator | 2026-02-17 03:29:51.885544 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-17 03:29:51.885555 | orchestrator | Tuesday 17 February 2026 03:29:49 +0000 (0:00:00.558) 0:01:59.982 ****** 2026-02-17 03:29:51.885566 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-17 03:29:51.885576 | orchestrator | enable_outward_rabbitmq_True 2026-02-17 03:29:51.885588 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-17 03:29:51.885599 | orchestrator | outward_rabbitmq_restart 2026-02-17 03:29:51.885610 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:29:51.885621 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:29:51.885632 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:29:51.885643 | orchestrator | 2026-02-17 03:29:51.885654 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-17 03:29:51.885665 | orchestrator | skipping: no hosts matched 2026-02-17 03:29:51.885676 | orchestrator | 2026-02-17 03:29:51.885686 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-17 03:29:51.885697 | orchestrator | skipping: no hosts matched 2026-02-17 03:29:51.885708 | orchestrator | 2026-02-17 03:29:51.885719 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-17 03:29:51.885730 | orchestrator | skipping: no hosts matched 2026-02-17 03:29:51.885741 | orchestrator | 2026-02-17 03:29:51.885752 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-17 03:29:51.885796 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-17 03:29:51.885810 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:29:51.885821 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:29:51.885832 | orchestrator | 2026-02-17 03:29:51.885843 | orchestrator | 2026-02-17 03:29:51.885855 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:29:51.885866 | orchestrator | Tuesday 17 February 2026 03:29:51 +0000 (0:00:02.243) 0:02:02.225 ****** 2026-02-17 03:29:51.885877 | orchestrator | =============================================================================== 2026-02-17 03:29:51.885888 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.81s 2026-02-17 03:29:51.885899 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.86s 2026-02-17 03:29:51.885919 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.25s 2026-02-17 03:29:51.885930 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.24s 2026-02-17 03:29:51.885941 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.01s 2026-02-17 03:29:51.885952 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.75s 2026-02-17 03:29:51.885963 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.75s 2026-02-17 03:29:51.885974 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.73s 2026-02-17 03:29:51.885985 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.54s 2026-02-17 03:29:51.885996 
| orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.42s 2026-02-17 03:29:51.886007 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.37s 2026-02-17 03:29:51.886104 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.32s 2026-02-17 03:29:51.886117 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.22s 2026-02-17 03:29:51.886128 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s 2026-02-17 03:29:51.886146 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.89s 2026-02-17 03:29:51.886158 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.88s 2026-02-17 03:29:51.886169 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.88s 2026-02-17 03:29:51.886180 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.80s 2026-02-17 03:29:51.886215 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.75s 2026-02-17 03:29:51.886227 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-02-17 03:29:54.499711 | orchestrator | 2026-02-17 03:29:54 | INFO  | Task cc93f79c-9f70-48d8-89b5-62e7ad6a392a (openvswitch) was prepared for execution. 2026-02-17 03:29:54.500593 | orchestrator | 2026-02-17 03:29:54 | INFO  | It takes a moment until task cc93f79c-9f70-48d8-89b5-62e7ad6a392a (openvswitch) has been started and output is visible here. 
2026-02-17 03:30:07.866868 | orchestrator | 2026-02-17 03:30:07.868393 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 03:30:07.868414 | orchestrator | 2026-02-17 03:30:07.868421 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 03:30:07.868427 | orchestrator | Tuesday 17 February 2026 03:29:59 +0000 (0:00:00.328) 0:00:00.328 ****** 2026-02-17 03:30:07.868434 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:30:07.868441 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:30:07.868447 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:30:07.868453 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:30:07.868459 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:30:07.868464 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:30:07.868470 | orchestrator | 2026-02-17 03:30:07.868479 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 03:30:07.868488 | orchestrator | Tuesday 17 February 2026 03:29:59 +0000 (0:00:00.743) 0:00:01.071 ****** 2026-02-17 03:30:07.868498 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 03:30:07.868508 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 03:30:07.868517 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 03:30:07.868526 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 03:30:07.868534 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 03:30:07.868542 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 03:30:07.868587 | orchestrator | 2026-02-17 03:30:07.868597 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-17 03:30:07.868605 | orchestrator | 2026-02-17 03:30:07.868614 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-17 03:30:07.868624 | orchestrator | Tuesday 17 February 2026 03:30:00 +0000 (0:00:00.610) 0:00:01.682 ****** 2026-02-17 03:30:07.868634 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:30:07.868645 | orchestrator | 2026-02-17 03:30:07.868653 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-17 03:30:07.868662 | orchestrator | Tuesday 17 February 2026 03:30:01 +0000 (0:00:01.213) 0:00:02.896 ****** 2026-02-17 03:30:07.868671 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-17 03:30:07.868681 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-17 03:30:07.868689 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-17 03:30:07.868698 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-17 03:30:07.868707 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-17 03:30:07.868716 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-17 03:30:07.868724 | orchestrator | 2026-02-17 03:30:07.868733 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-17 03:30:07.868741 | orchestrator | Tuesday 17 February 2026 03:30:02 +0000 (0:00:01.247) 0:00:04.144 ****** 2026-02-17 03:30:07.868750 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-17 03:30:07.868758 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-17 03:30:07.868768 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-17 03:30:07.868777 | orchestrator | changed: 
[testbed-node-1] => (item=openvswitch) 2026-02-17 03:30:07.868785 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-17 03:30:07.868795 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-17 03:30:07.868804 | orchestrator | 2026-02-17 03:30:07.868811 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-17 03:30:07.868816 | orchestrator | Tuesday 17 February 2026 03:30:04 +0000 (0:00:01.505) 0:00:05.650 ****** 2026-02-17 03:30:07.868821 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-17 03:30:07.868826 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:30:07.868832 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-17 03:30:07.868836 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:30:07.868841 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-17 03:30:07.868846 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:30:07.868851 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-17 03:30:07.868856 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:30:07.868861 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-17 03:30:07.868866 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:30:07.868870 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-17 03:30:07.868876 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:30:07.868880 | orchestrator | 2026-02-17 03:30:07.868885 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-17 03:30:07.868890 | orchestrator | Tuesday 17 February 2026 03:30:05 +0000 (0:00:01.291) 0:00:06.941 ****** 2026-02-17 03:30:07.868895 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:30:07.868900 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:30:07.868905 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 03:30:07.868910 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:30:07.868914 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:30:07.868919 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:30:07.868924 | orchestrator | 2026-02-17 03:30:07.868929 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-17 03:30:07.868941 | orchestrator | Tuesday 17 February 2026 03:30:06 +0000 (0:00:00.772) 0:00:07.714 ****** 2026-02-17 03:30:07.868970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:07.868980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:07.868986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:07.869025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:07.869038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:07.869073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572422 | orchestrator | 2026-02-17 03:30:10.572435 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-17 03:30:10.572449 | orchestrator | Tuesday 17 February 2026 03:30:07 +0000 (0:00:01.433) 0:00:09.147 ****** 2026-02-17 03:30:10.572461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:10.572541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389544 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389669 | orchestrator | 2026-02-17 03:30:13.389683 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-17 03:30:13.389698 | orchestrator | Tuesday 17 February 2026 03:30:10 +0000 (0:00:02.707) 0:00:11.855 ****** 2026-02-17 03:30:13.389710 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:30:13.389724 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:30:13.389737 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:30:13.389749 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:30:13.389761 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:30:13.389773 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:30:13.389786 | orchestrator | 2026-02-17 03:30:13.389800 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-17 03:30:13.389813 | orchestrator | Tuesday 17 February 2026 03:30:11 +0000 (0:00:01.045) 0:00:12.901 ****** 2026-02-17 03:30:13.389826 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:13.389910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:38.523666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 03:30:38.523783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:38.523799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 
03:30:38.523853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:38.523867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:38.523896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:38.523908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 03:30:38.523920 | orchestrator | 2026-02-17 03:30:38.523934 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 03:30:38.523946 | orchestrator | Tuesday 17 February 2026 03:30:13 +0000 (0:00:01.766) 0:00:14.667 ****** 2026-02-17 03:30:38.523957 | orchestrator | 2026-02-17 03:30:38.523969 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 03:30:38.523980 | orchestrator | Tuesday 17 February 2026 03:30:13 +0000 (0:00:00.312) 0:00:14.980 ****** 2026-02-17 03:30:38.523999 | orchestrator | 2026-02-17 03:30:38.524010 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 03:30:38.524021 | orchestrator | Tuesday 17 February 2026 03:30:13 +0000 (0:00:00.132) 0:00:15.113 ****** 2026-02-17 03:30:38.524032 | orchestrator | 2026-02-17 03:30:38.524043 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
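The container definitions dumped above each carry a healthcheck: `ovsdb-client list-dbs` for `openvswitch_db` and `ovs-appctl version` for `openvswitch_vswitchd`. As a rough sketch, the same probes can be re-run by hand against the deployed containers. Only the container names and CMD-SHELL strings come from the log; the wrapper function itself is a hypothetical convenience, and the script degrades to a skip message where Docker is unavailable:

```shell
# Re-run the probes used by the kolla container healthchecks above.
# Only the container names and probe commands are taken from the log;
# probe_container is a hypothetical helper, not part of kolla-ansible.
probe_container() {
    name=$1; probe=$2
    if ! command -v docker >/dev/null 2>&1; then
        echo "$name: docker unavailable, skipped"
    elif docker exec "$name" sh -c "$probe" >/dev/null 2>&1; then
        echo "$name: healthy"
    else
        echo "$name: probe failed"
    fi
}

# Same CMD-SHELL test strings as in the healthcheck definitions
result_db=$(probe_container openvswitch_db 'ovsdb-client list-dbs')
result_vswitchd=$(probe_container openvswitch_vswitchd 'ovs-appctl version')
echo "$result_db"
echo "$result_vswitchd"
```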
2026-02-17 03:30:38.524054 | orchestrator | Tuesday 17 February 2026 03:30:14 +0000 (0:00:00.125) 0:00:15.238 ****** 2026-02-17 03:30:38.524065 | orchestrator | 2026-02-17 03:30:38.524076 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 03:30:38.524086 | orchestrator | Tuesday 17 February 2026 03:30:14 +0000 (0:00:00.139) 0:00:15.378 ****** 2026-02-17 03:30:38.524097 | orchestrator | 2026-02-17 03:30:38.524108 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 03:30:38.524123 | orchestrator | Tuesday 17 February 2026 03:30:14 +0000 (0:00:00.135) 0:00:15.514 ****** 2026-02-17 03:30:38.524142 | orchestrator | 2026-02-17 03:30:38.524159 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-17 03:30:38.524176 | orchestrator | Tuesday 17 February 2026 03:30:14 +0000 (0:00:00.142) 0:00:15.656 ****** 2026-02-17 03:30:38.524195 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:30:38.524247 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:30:38.524267 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:30:38.524285 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:30:38.524303 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:30:38.524319 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:30:38.524336 | orchestrator | 2026-02-17 03:30:38.524355 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-17 03:30:38.524375 | orchestrator | Tuesday 17 February 2026 03:30:23 +0000 (0:00:08.718) 0:00:24.374 ****** 2026-02-17 03:30:38.524399 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:30:38.524420 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:30:38.524435 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:30:38.524450 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:30:38.524466 | orchestrator | ok: 
[testbed-node-4] 2026-02-17 03:30:38.524482 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:30:38.524498 | orchestrator | 2026-02-17 03:30:38.524514 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-17 03:30:38.524532 | orchestrator | Tuesday 17 February 2026 03:30:24 +0000 (0:00:01.102) 0:00:25.476 ****** 2026-02-17 03:30:38.524549 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:30:38.524566 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:30:38.524583 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:30:38.524601 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:30:38.524619 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:30:38.524636 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:30:38.524655 | orchestrator | 2026-02-17 03:30:38.524673 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-17 03:30:38.524691 | orchestrator | Tuesday 17 February 2026 03:30:32 +0000 (0:00:08.076) 0:00:33.553 ****** 2026-02-17 03:30:38.524710 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-17 03:30:38.524728 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-17 03:30:38.524745 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-17 03:30:38.524762 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-17 03:30:38.524782 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-17 03:30:38.524801 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-17 
03:30:38.524819 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-17 03:30:38.524870 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-17 03:30:51.499426 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-17 03:30:51.499518 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-17 03:30:51.499531 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-17 03:30:51.499541 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-17 03:30:51.499551 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 03:30:51.499558 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 03:30:51.499566 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 03:30:51.499574 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 03:30:51.499582 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 03:30:51.499589 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 03:30:51.499598 | orchestrator | 2026-02-17 03:30:51.499606 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
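The "Set system-id, hostname and hw-offload" items above map onto plain `ovs-vsctl` operations on the `Open_vSwitch` table: the `external_ids` entries are set, while the `hw-offload` item carries `state: absent` and is removed. A minimal sketch of the equivalent commands, assuming the node name `testbed-node-0` from the log; `build_cmds` is a hypothetical helper, and the script only prints the commands where `ovs-vsctl` is not installed:

```shell
# Approximate ovs-vsctl equivalents of the task items above.
# build_cmds is a hypothetical helper; the column/key names and the
# testbed-node-0 value are taken from the log output.
build_cmds() {
    node=$1
    echo "ovs-vsctl set Open_vSwitch . external_ids:system-id=$node"
    echo "ovs-vsctl set Open_vSwitch . external_ids:hostname=$node"
    # 'state': 'absent' in the item list means the key is removed
    echo "ovs-vsctl remove Open_vSwitch . other_config hw-offload"
}

cmds=$(build_cmds testbed-node-0)
if command -v ovs-vsctl >/dev/null 2>&1; then
    printf '%s\n' "$cmds" | sh
else
    printf '%s\n' "$cmds"   # dry run: show what would be executed
fi
```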
2026-02-17 03:30:51.499616 | orchestrator | Tuesday 17 February 2026 03:30:38 +0000 (0:00:06.159) 0:00:39.712 ****** 2026-02-17 03:30:51.499625 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-17 03:30:51.499632 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:30:51.499641 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-17 03:30:51.499650 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:30:51.499655 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-17 03:30:51.499659 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:30:51.499664 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-17 03:30:51.499669 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-17 03:30:51.499673 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-17 03:30:51.499678 | orchestrator | 2026-02-17 03:30:51.499682 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-17 03:30:51.499687 | orchestrator | Tuesday 17 February 2026 03:30:40 +0000 (0:00:02.427) 0:00:42.140 ****** 2026-02-17 03:30:51.499692 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-17 03:30:51.499697 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:30:51.499704 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-17 03:30:51.499711 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:30:51.499718 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-17 03:30:51.499726 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:30:51.499733 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-17 03:30:51.499742 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-17 03:30:51.499764 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-17 03:30:51.499773 | orchestrator 
|
2026-02-17 03:30:51.499778 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-17 03:30:51.499783 | orchestrator | Tuesday 17 February 2026 03:30:43 +0000 (0:00:03.055) 0:00:45.195 ******
2026-02-17 03:30:51.499787 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:30:51.499792 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:30:51.499813 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:30:51.499818 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:30:51.499822 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:30:51.499827 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:30:51.499831 | orchestrator |
2026-02-17 03:30:51.499836 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:30:51.499842 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-17 03:30:51.499848 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-17 03:30:51.499853 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-17 03:30:51.499861 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-17 03:30:51.499868 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-17 03:30:51.499876 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-17 03:30:51.499884 | orchestrator |
2026-02-17 03:30:51.499892 | orchestrator |
2026-02-17 03:30:51.499900 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:30:51.499908 | orchestrator | Tuesday 17 February 2026 03:30:51 +0000 (0:00:07.085) 0:00:52.281 ******
2026-02-17 03:30:51.499930 | orchestrator | ===============================================================================
2026-02-17 03:30:51.499935 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.16s
2026-02-17 03:30:51.499940 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.72s
2026-02-17 03:30:51.499945 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.16s
2026-02-17 03:30:51.499953 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.06s
2026-02-17 03:30:51.499960 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.71s
2026-02-17 03:30:51.499969 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.43s
2026-02-17 03:30:51.499977 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.77s
2026-02-17 03:30:51.499985 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.51s
2026-02-17 03:30:51.499993 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.43s
2026-02-17 03:30:51.500002 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.29s
2026-02-17 03:30:51.500011 | orchestrator | module-load : Load modules ---------------------------------------------- 1.25s
2026-02-17 03:30:51.500017 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.21s
2026-02-17 03:30:51.500023 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.10s
2026-02-17 03:30:51.500030 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.05s
2026-02-17 03:30:51.500038 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.99s
2026-02-17 03:30:51.500046 | orchestrator |
openvswitch : Create /run/openvswitch directory on host ----------------- 0.77s 2026-02-17 03:30:51.500054 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2026-02-17 03:30:51.500062 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-02-17 03:30:54.069130 | orchestrator | 2026-02-17 03:30:54 | INFO  | Task 51ae71a9-87a8-441b-a6b4-3692a7c9f96e (ovn) was prepared for execution. 2026-02-17 03:30:54.069212 | orchestrator | 2026-02-17 03:30:54 | INFO  | It takes a moment until task 51ae71a9-87a8-441b-a6b4-3692a7c9f96e (ovn) has been started and output is visible here. 2026-02-17 03:31:05.109252 | orchestrator | 2026-02-17 03:31:05.109375 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 03:31:05.109398 | orchestrator | 2026-02-17 03:31:05.109408 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 03:31:05.109417 | orchestrator | Tuesday 17 February 2026 03:30:58 +0000 (0:00:00.169) 0:00:00.169 ****** 2026-02-17 03:31:05.109427 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:31:05.109437 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:31:05.109446 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:31:05.109455 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:31:05.109463 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:31:05.109472 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:31:05.109481 | orchestrator | 2026-02-17 03:31:05.109491 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 03:31:05.109500 | orchestrator | Tuesday 17 February 2026 03:30:59 +0000 (0:00:00.759) 0:00:00.929 ****** 2026-02-17 03:31:05.109523 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-17 03:31:05.109533 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-17 
03:31:05.109541 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-17 03:31:05.109550 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-17 03:31:05.109559 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-17 03:31:05.109568 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-17 03:31:05.109576 | orchestrator | 2026-02-17 03:31:05.109586 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-17 03:31:05.109595 | orchestrator | 2026-02-17 03:31:05.109604 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-17 03:31:05.109612 | orchestrator | Tuesday 17 February 2026 03:31:00 +0000 (0:00:00.875) 0:00:01.805 ****** 2026-02-17 03:31:05.109622 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:31:05.109632 | orchestrator | 2026-02-17 03:31:05.109641 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-17 03:31:05.109650 | orchestrator | Tuesday 17 February 2026 03:31:01 +0000 (0:00:01.172) 0:00:02.977 ****** 2026-02-17 03:31:05.109660 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109772 | orchestrator | 2026-02-17 03:31:05.109793 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-17 03:31:05.109808 | orchestrator | Tuesday 17 February 2026 03:31:02 +0000 (0:00:01.134) 0:00:04.111 ****** 2026-02-17 03:31:05.109831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109880 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.109937 | orchestrator | 2026-02-17 03:31:05.109968 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-17 03:31:05.109996 | orchestrator | Tuesday 17 February 2026 03:31:03 +0000 (0:00:01.483) 0:00:05.595 ****** 2026-02-17 03:31:05.110011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.110088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:05.110157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.190800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191056 | orchestrator | 2026-02-17 03:31:29.191077 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-17 03:31:29.191097 | orchestrator | Tuesday 17 February 2026 03:31:05 +0000 (0:00:01.179) 0:00:06.774 ****** 2026-02-17 03:31:29.191118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191339 | orchestrator | 2026-02-17 03:31:29.191356 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-17 03:31:29.191370 | orchestrator | Tuesday 17 February 2026 03:31:06 +0000 (0:00:01.517) 0:00:08.292 ****** 
2026-02-17 03:31:29.191393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191407 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191475 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:31:29.191516 | orchestrator | 2026-02-17 03:31:29.191536 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-17 03:31:29.191557 | orchestrator | Tuesday 17 February 2026 03:31:07 +0000 (0:00:01.372) 0:00:09.665 ****** 2026-02-17 03:31:29.191577 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:31:29.191594 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:31:29.191605 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:31:29.191616 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:31:29.191627 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:31:29.191638 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:31:29.191648 | orchestrator | 2026-02-17 03:31:29.191659 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-17 03:31:29.191670 | orchestrator | Tuesday 17 February 2026 03:31:10 +0000 (0:00:02.434) 0:00:12.099 ****** 2026-02-17 03:31:29.191681 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-02-17 03:31:29.191693 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-17 03:31:29.191704 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-17 03:31:29.191714 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-17 03:31:29.191725 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-17 03:31:29.191736 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-17 03:31:29.191756 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 03:32:09.492956 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 03:32:09.493078 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 03:32:09.493115 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 03:32:09.493129 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 03:32:09.493143 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 03:32:09.493157 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-17 03:32:09.493173 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-17 03:32:09.493260 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-17 03:32:09.493275 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-17 03:32:09.493289 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-17 03:32:09.493302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-17 03:32:09.493317 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 03:32:09.493350 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 03:32:09.493363 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 03:32:09.493376 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 03:32:09.493390 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 03:32:09.493403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 03:32:09.493416 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 03:32:09.493429 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 03:32:09.493443 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 03:32:09.493456 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 03:32:09.493468 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-17 03:32:09.493480 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 03:32:09.493493 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 03:32:09.493507 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 03:32:09.493521 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 03:32:09.493533 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 03:32:09.493546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 03:32:09.493559 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 03:32:09.493571 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-17 03:32:09.493586 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-17 03:32:09.493599 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-17 03:32:09.493613 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-17 03:32:09.493628 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-17 03:32:09.493641 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-17 03:32:09.493653 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-02-17 03:32:09.493704 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-17 03:32:09.493720 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-17 03:32:09.493743 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-17 03:32:09.493758 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-17 03:32:09.493770 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-17 03:32:09.493874 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-17 03:32:09.493889 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-17 03:32:09.493901 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-17 03:32:09.493913 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-17 03:32:09.493927 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-17 03:32:09.493940 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-17 03:32:09.493954 | orchestrator | 2026-02-17 03:32:09.493968 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-17 03:32:09.493981 | orchestrator | Tuesday 17 February 2026 03:31:28 +0000 (0:00:18.191) 0:00:30.290 ****** 2026-02-17 03:32:09.493993 | orchestrator | 2026-02-17 03:32:09.494006 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 03:32:09.494082 | orchestrator | Tuesday 17 February 2026 03:31:28 +0000 (0:00:00.231) 0:00:30.521 ****** 2026-02-17 03:32:09.494096 | orchestrator | 2026-02-17 03:32:09.494109 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 03:32:09.494122 | orchestrator | Tuesday 17 February 2026 03:31:28 +0000 (0:00:00.063) 0:00:30.585 ****** 2026-02-17 03:32:09.494135 | orchestrator | 2026-02-17 03:32:09.494149 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 03:32:09.494161 | orchestrator | Tuesday 17 February 2026 03:31:28 +0000 (0:00:00.064) 0:00:30.649 ****** 2026-02-17 03:32:09.494174 | orchestrator | 2026-02-17 03:32:09.494185 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 03:32:09.494192 | orchestrator | Tuesday 17 February 2026 03:31:29 +0000 (0:00:00.065) 0:00:30.714 ****** 2026-02-17 03:32:09.494305 | orchestrator | 2026-02-17 03:32:09.494318 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 03:32:09.494326 | orchestrator | Tuesday 17 February 2026 03:31:29 +0000 (0:00:00.067) 0:00:30.781 ****** 2026-02-17 03:32:09.494334 | orchestrator | 2026-02-17 03:32:09.494342 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-17 03:32:09.494350 | orchestrator | Tuesday 17 February 2026 03:31:29 +0000 (0:00:00.063) 0:00:30.845 ****** 2026-02-17 03:32:09.494358 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:32:09.494367 | orchestrator | ok: 
[testbed-node-4] 2026-02-17 03:32:09.494375 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:32:09.494383 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:32:09.494391 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:32:09.494399 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:32:09.494407 | orchestrator | 2026-02-17 03:32:09.494415 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-17 03:32:09.494423 | orchestrator | Tuesday 17 February 2026 03:31:30 +0000 (0:00:01.626) 0:00:32.472 ****** 2026-02-17 03:32:09.494445 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:32:09.494454 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:32:09.494462 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:32:09.494470 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:32:09.494477 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:32:09.494485 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:32:09.494493 | orchestrator | 2026-02-17 03:32:09.494501 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-17 03:32:09.494510 | orchestrator | 2026-02-17 03:32:09.494517 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-17 03:32:09.494526 | orchestrator | Tuesday 17 February 2026 03:32:07 +0000 (0:00:36.351) 0:01:08.823 ****** 2026-02-17 03:32:09.494534 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:32:09.494542 | orchestrator | 2026-02-17 03:32:09.494550 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-17 03:32:09.494558 | orchestrator | Tuesday 17 February 2026 03:32:07 +0000 (0:00:00.790) 0:01:09.613 ****** 2026-02-17 03:32:09.494566 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-17 03:32:09.494574 | orchestrator | 2026-02-17 03:32:09.494582 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-17 03:32:09.494590 | orchestrator | Tuesday 17 February 2026 03:32:08 +0000 (0:00:00.574) 0:01:10.188 ****** 2026-02-17 03:32:09.494598 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:32:09.494606 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:32:09.494614 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:32:09.494622 | orchestrator | 2026-02-17 03:32:09.494630 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-17 03:32:09.494653 | orchestrator | Tuesday 17 February 2026 03:32:09 +0000 (0:00:00.968) 0:01:11.157 ****** 2026-02-17 03:32:20.985468 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:32:20.985561 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:32:20.985572 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:32:20.985581 | orchestrator | 2026-02-17 03:32:20.985590 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-17 03:32:20.985612 | orchestrator | Tuesday 17 February 2026 03:32:09 +0000 (0:00:00.335) 0:01:11.492 ****** 2026-02-17 03:32:20.985620 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:32:20.985627 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:32:20.985635 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:32:20.985642 | orchestrator | 2026-02-17 03:32:20.985650 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-17 03:32:20.985670 | orchestrator | Tuesday 17 February 2026 03:32:10 +0000 (0:00:00.346) 0:01:11.839 ****** 2026-02-17 03:32:20.985678 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:32:20.985685 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:32:20.985700 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:32:20.985708 | orchestrator | 
2026-02-17 03:32:20.985715 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-17 03:32:20.985722 | orchestrator | Tuesday 17 February 2026 03:32:10 +0000 (0:00:00.336) 0:01:12.176 ******
2026-02-17 03:32:20.985730 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:32:20.985737 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:32:20.985744 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:32:20.985751 | orchestrator |
2026-02-17 03:32:20.985759 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-17 03:32:20.985766 | orchestrator | Tuesday 17 February 2026 03:32:11 +0000 (0:00:00.521) 0:01:12.697 ******
2026-02-17 03:32:20.985774 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.985783 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.985790 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.985797 | orchestrator |
2026-02-17 03:32:20.985805 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-17 03:32:20.985829 | orchestrator | Tuesday 17 February 2026 03:32:11 +0000 (0:00:00.308) 0:01:13.005 ******
2026-02-17 03:32:20.985837 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.985844 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.985851 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.985859 | orchestrator |
2026-02-17 03:32:20.985866 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-17 03:32:20.985873 | orchestrator | Tuesday 17 February 2026 03:32:11 +0000 (0:00:00.346) 0:01:13.352 ******
2026-02-17 03:32:20.985880 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.985888 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.985895 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.985902 | orchestrator |
2026-02-17 03:32:20.985909 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-17 03:32:20.985917 | orchestrator | Tuesday 17 February 2026 03:32:11 +0000 (0:00:00.315) 0:01:13.668 ******
2026-02-17 03:32:20.985924 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.985931 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.985939 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.985946 | orchestrator |
2026-02-17 03:32:20.985953 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-17 03:32:20.985960 | orchestrator | Tuesday 17 February 2026 03:32:12 +0000 (0:00:00.355) 0:01:14.023 ******
2026-02-17 03:32:20.985968 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.985975 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.985982 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.985990 | orchestrator |
2026-02-17 03:32:20.985997 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-17 03:32:20.986004 | orchestrator | Tuesday 17 February 2026 03:32:12 +0000 (0:00:00.518) 0:01:14.542 ******
2026-02-17 03:32:20.986011 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986062 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986071 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986079 | orchestrator |
2026-02-17 03:32:20.986088 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-17 03:32:20.986096 | orchestrator | Tuesday 17 February 2026 03:32:13 +0000 (0:00:00.305) 0:01:14.847 ******
2026-02-17 03:32:20.986104 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986112 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986120 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986129 | orchestrator |
2026-02-17 03:32:20.986137 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-17 03:32:20.986152 | orchestrator | Tuesday 17 February 2026 03:32:13 +0000 (0:00:00.321) 0:01:15.168 ******
2026-02-17 03:32:20.986161 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986170 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986178 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986186 | orchestrator |
2026-02-17 03:32:20.986194 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-17 03:32:20.986218 | orchestrator | Tuesday 17 February 2026 03:32:13 +0000 (0:00:00.301) 0:01:15.470 ******
2026-02-17 03:32:20.986227 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986235 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986244 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986252 | orchestrator |
2026-02-17 03:32:20.986260 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-17 03:32:20.986268 | orchestrator | Tuesday 17 February 2026 03:32:14 +0000 (0:00:00.535) 0:01:16.005 ******
2026-02-17 03:32:20.986277 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986285 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986293 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986301 | orchestrator |
2026-02-17 03:32:20.986309 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-17 03:32:20.986324 | orchestrator | Tuesday 17 February 2026 03:32:14 +0000 (0:00:00.336) 0:01:16.342 ******
2026-02-17 03:32:20.986333 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986341 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986349 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986358 | orchestrator |
2026-02-17 03:32:20.986366 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-17 03:32:20.986375 | orchestrator | Tuesday 17 February 2026 03:32:14 +0000 (0:00:00.308) 0:01:16.650 ******
2026-02-17 03:32:20.986396 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986404 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986411 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986418 | orchestrator |
2026-02-17 03:32:20.986426 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-17 03:32:20.986438 | orchestrator | Tuesday 17 February 2026 03:32:15 +0000 (0:00:00.330) 0:01:16.981 ******
2026-02-17 03:32:20.986446 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:32:20.986453 | orchestrator |
2026-02-17 03:32:20.986461 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-17 03:32:20.986468 | orchestrator | Tuesday 17 February 2026 03:32:16 +0000 (0:00:00.804) 0:01:17.785 ******
2026-02-17 03:32:20.986475 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:32:20.986482 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:32:20.986490 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:32:20.986497 | orchestrator |
2026-02-17 03:32:20.986504 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-17 03:32:20.986511 | orchestrator | Tuesday 17 February 2026 03:32:16 +0000 (0:00:00.454) 0:01:18.239 ******
2026-02-17 03:32:20.986518 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:32:20.986526 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:32:20.986533 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:32:20.986540 | orchestrator |
2026-02-17 03:32:20.986547 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-17 03:32:20.986554 | orchestrator | Tuesday 17 February 2026 03:32:17 +0000 (0:00:00.340) 0:01:18.689 ******
2026-02-17 03:32:20.986561 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986569 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986576 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986583 | orchestrator |
2026-02-17 03:32:20.986590 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-17 03:32:20.986598 | orchestrator | Tuesday 17 February 2026 03:32:17 +0000 (0:00:00.340) 0:01:19.030 ******
2026-02-17 03:32:20.986605 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986612 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986619 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986626 | orchestrator |
2026-02-17 03:32:20.986634 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-17 03:32:20.986641 | orchestrator | Tuesday 17 February 2026 03:32:17 +0000 (0:00:00.608) 0:01:19.639 ******
2026-02-17 03:32:20.986648 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986655 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986663 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986670 | orchestrator |
2026-02-17 03:32:20.986677 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-17 03:32:20.986684 | orchestrator | Tuesday 17 February 2026 03:32:18 +0000 (0:00:00.351) 0:01:19.990 ******
2026-02-17 03:32:20.986692 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986699 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986706 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986713 | orchestrator |
2026-02-17 03:32:20.986721 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-17 03:32:20.986728 | orchestrator | Tuesday 17 February 2026 03:32:18 +0000 (0:00:00.359) 0:01:20.349 ******
2026-02-17 03:32:20.986743 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986751 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986758 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986765 | orchestrator |
2026-02-17 03:32:20.986773 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-17 03:32:20.986780 | orchestrator | Tuesday 17 February 2026 03:32:19 +0000 (0:00:00.334) 0:01:20.683 ******
2026-02-17 03:32:20.986787 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:20.986794 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:20.986801 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:20.986809 | orchestrator |
2026-02-17 03:32:20.986816 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-17 03:32:20.986823 | orchestrator | Tuesday 17 February 2026 03:32:19 +0000 (0:00:00.563) 0:01:21.247 ******
2026-02-17 03:32:20.986832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:20.986842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:20.986850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:20.986866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.380679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.380791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.380817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.380850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.380904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.380925 | orchestrator |
2026-02-17 03:32:27.380946 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-17 03:32:27.380964 | orchestrator | Tuesday 17 February 2026 03:32:20 +0000 (0:00:01.405) 0:01:22.652 ******
2026-02-17 03:32:27.380985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381200 | orchestrator |
2026-02-17 03:32:27.381247 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-17 03:32:27.381261 | orchestrator | Tuesday 17 February 2026 03:32:24 +0000 (0:00:03.852) 0:01:26.505 ******
2026-02-17 03:32:27.381273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:27.381354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.038127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.038322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.038342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.038355 | orchestrator |
2026-02-17 03:32:46.038368 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-17 03:32:46.038380 | orchestrator | Tuesday 17 February 2026 03:32:26 +0000 (0:00:02.110) 0:01:28.616 ******
2026-02-17 03:32:46.038392 | orchestrator |
2026-02-17 03:32:46.038403 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-17 03:32:46.038414 | orchestrator | Tuesday 17 February 2026 03:32:27 +0000 (0:00:00.066) 0:01:28.682 ******
2026-02-17 03:32:46.038437 | orchestrator |
2026-02-17 03:32:46.038448 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-17 03:32:46.038459 | orchestrator | Tuesday 17 February 2026 03:32:27 +0000 (0:00:00.279) 0:01:28.962 ******
2026-02-17 03:32:46.038470 | orchestrator |
2026-02-17 03:32:46.038481 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-17 03:32:46.038492 | orchestrator | Tuesday 17 February 2026 03:32:27 +0000 (0:00:00.082) 0:01:29.045 ******
2026-02-17 03:32:46.038503 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:32:46.038516 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:32:46.038527 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:32:46.038538 | orchestrator |
2026-02-17 03:32:46.038549 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-17 03:32:46.038560 | orchestrator | Tuesday 17 February 2026 03:32:34 +0000 (0:00:06.650) 0:01:35.696 ******
2026-02-17 03:32:46.038572 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:32:46.038583 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:32:46.038597 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:32:46.038610 | orchestrator |
2026-02-17 03:32:46.038623 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-17 03:32:46.038636 | orchestrator | Tuesday 17 February 2026 03:32:36 +0000 (0:00:02.611) 0:01:38.307 ******
2026-02-17 03:32:46.038648 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:32:46.038661 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:32:46.038673 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:32:46.038685 | orchestrator |
2026-02-17 03:32:46.038697 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-17 03:32:46.038710 | orchestrator | Tuesday 17 February 2026 03:32:39 +0000 (0:00:02.529) 0:01:40.837 ******
2026-02-17 03:32:46.038723 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:32:46.038735 | orchestrator |
2026-02-17 03:32:46.038748 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-17 03:32:46.038761 | orchestrator | Tuesday 17 February 2026 03:32:39 +0000 (0:00:00.142) 0:01:40.979 ******
2026-02-17 03:32:46.038774 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:32:46.038788 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:32:46.038801 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:32:46.038814 | orchestrator |
2026-02-17 03:32:46.038827 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-17 03:32:46.038839 | orchestrator | Tuesday 17 February 2026 03:32:40 +0000 (0:00:01.017) 0:01:41.996 ******
2026-02-17 03:32:46.038852 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:46.038881 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:46.038901 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:32:46.038920 | orchestrator |
2026-02-17 03:32:46.038938 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-17 03:32:46.038956 | orchestrator | Tuesday 17 February 2026 03:32:40 +0000 (0:00:00.598) 0:01:42.595 ******
2026-02-17 03:32:46.038974 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:32:46.038993 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:32:46.039012 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:32:46.039028 | orchestrator |
2026-02-17 03:32:46.039048 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-17 03:32:46.039085 | orchestrator | Tuesday 17 February 2026 03:32:41 +0000 (0:00:00.770) 0:01:43.366 ******
2026-02-17 03:32:46.039104 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:32:46.039123 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:32:46.039142 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:32:46.039159 | orchestrator |
2026-02-17 03:32:46.039177 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-17 03:32:46.039194 | orchestrator | Tuesday 17 February 2026 03:32:42 +0000 (0:00:00.618) 0:01:43.984 ******
2026-02-17 03:32:46.039213 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:32:46.039347 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:32:46.039390 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:32:46.039410 | orchestrator |
2026-02-17 03:32:46.039426 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-17 03:32:46.039442 | orchestrator | Tuesday 17 February 2026 03:32:43 +0000 (0:00:01.260) 0:01:45.245 ******
2026-02-17 03:32:46.039459 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:32:46.039476 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:32:46.039492 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:32:46.039508 | orchestrator |
2026-02-17 03:32:46.039525 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-17 03:32:46.039541 | orchestrator | Tuesday 17 February 2026 03:32:44 +0000 (0:00:00.759) 0:01:46.005 ******
2026-02-17 03:32:46.039558 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:32:46.039575 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:32:46.039591 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:32:46.039607 | orchestrator |
2026-02-17 03:32:46.039624 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-17 03:32:46.039641 | orchestrator | Tuesday 17 February 2026 03:32:44 +0000 (0:00:00.334) 0:01:46.339 ******
2026-02-17 03:32:46.039659 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.039678 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.039694 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.039712 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.039742 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.039759 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.039776 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.039801 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:46.039832 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101443 | orchestrator |
2026-02-17 03:32:53.101541 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-17 03:32:53.101554 | orchestrator | Tuesday 17 February 2026 03:32:46 +0000 (0:00:01.361) 0:01:47.701 ******
2026-02-17 03:32:53.101561 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101570 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101575 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101580 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101616 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101642 | orchestrator |
2026-02-17 03:32:53.101647 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-17 03:32:53.101651 | orchestrator | Tuesday 17 February 2026 03:32:49 +0000 (0:00:03.808) 0:01:51.509 ******
2026-02-17 03:32:53.101670 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101680 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101685 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101707 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 03:32:53.101712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:32:53.101720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 03:32:53.101727 | orchestrator | 2026-02-17 03:32:53.101735 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-17 03:32:53.101742 | orchestrator | Tuesday 17 February 2026 03:32:52 +0000 (0:00:03.022) 0:01:54.532 ****** 2026-02-17 03:32:53.101749 | orchestrator | 2026-02-17 03:32:53.101756 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-17 03:32:53.101763 | orchestrator | Tuesday 17 February 2026 03:32:52 +0000 (0:00:00.073) 0:01:54.606 ****** 2026-02-17 03:32:53.101771 | orchestrator | 2026-02-17 03:32:53.101779 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-17 03:32:53.101787 | orchestrator | Tuesday 17 February 2026 03:32:53 +0000 (0:00:00.082) 0:01:54.688 ****** 2026-02-17 03:32:53.101796 | orchestrator | 2026-02-17 03:32:53.101805 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-17 03:33:18.200693 | orchestrator | Tuesday 17 February 2026 03:32:53 +0000 (0:00:00.067) 0:01:54.755 ****** 2026-02-17 03:33:18.200834 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:33:18.200863 | orchestrator | changed: 
[testbed-node-2] 2026-02-17 03:33:18.200883 | orchestrator | 2026-02-17 03:33:18.200903 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-17 03:33:18.200921 | orchestrator | Tuesday 17 February 2026 03:32:59 +0000 (0:00:06.268) 0:02:01.024 ****** 2026-02-17 03:33:18.200940 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:33:18.200958 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:33:18.200978 | orchestrator | 2026-02-17 03:33:18.200996 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-17 03:33:18.201052 | orchestrator | Tuesday 17 February 2026 03:33:05 +0000 (0:00:06.317) 0:02:07.342 ****** 2026-02-17 03:33:18.201071 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:33:18.201091 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:33:18.201111 | orchestrator | 2026-02-17 03:33:18.201130 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-17 03:33:18.201148 | orchestrator | Tuesday 17 February 2026 03:33:11 +0000 (0:00:06.323) 0:02:13.665 ****** 2026-02-17 03:33:18.201167 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:33:18.201185 | orchestrator | 2026-02-17 03:33:18.201204 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-17 03:33:18.201224 | orchestrator | Tuesday 17 February 2026 03:33:12 +0000 (0:00:00.171) 0:02:13.837 ****** 2026-02-17 03:33:18.201334 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:33:18.201368 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:33:18.201390 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:33:18.201409 | orchestrator | 2026-02-17 03:33:18.201428 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-17 03:33:18.201448 | orchestrator | Tuesday 17 February 2026 03:33:13 +0000 (0:00:01.023) 0:02:14.860 ****** 
2026-02-17 03:33:18.201468 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:33:18.201488 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:33:18.201507 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:33:18.201527 | orchestrator | 2026-02-17 03:33:18.201547 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-17 03:33:18.201567 | orchestrator | Tuesday 17 February 2026 03:33:13 +0000 (0:00:00.688) 0:02:15.549 ****** 2026-02-17 03:33:18.201588 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:33:18.201606 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:33:18.201624 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:33:18.201642 | orchestrator | 2026-02-17 03:33:18.201659 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-17 03:33:18.201678 | orchestrator | Tuesday 17 February 2026 03:33:14 +0000 (0:00:00.802) 0:02:16.351 ****** 2026-02-17 03:33:18.201695 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:33:18.201714 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:33:18.201732 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:33:18.201750 | orchestrator | 2026-02-17 03:33:18.201768 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-17 03:33:18.201787 | orchestrator | Tuesday 17 February 2026 03:33:15 +0000 (0:00:00.724) 0:02:17.075 ****** 2026-02-17 03:33:18.201804 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:33:18.201823 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:33:18.201842 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:33:18.201861 | orchestrator | 2026-02-17 03:33:18.201881 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-17 03:33:18.201901 | orchestrator | Tuesday 17 February 2026 03:33:16 +0000 (0:00:01.186) 0:02:18.261 ****** 2026-02-17 03:33:18.201918 | orchestrator 
| ok: [testbed-node-0] 2026-02-17 03:33:18.201938 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:33:18.201959 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:33:18.201979 | orchestrator | 2026-02-17 03:33:18.201999 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:33:18.202101 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-17 03:33:18.202131 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-17 03:33:18.202152 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-17 03:33:18.202172 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:33:18.202214 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:33:18.202261 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:33:18.202283 | orchestrator | 2026-02-17 03:33:18.202301 | orchestrator | 2026-02-17 03:33:18.202359 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:33:18.203291 | orchestrator | Tuesday 17 February 2026 03:33:17 +0000 (0:00:00.928) 0:02:19.190 ****** 2026-02-17 03:33:18.203364 | orchestrator | =============================================================================== 2026-02-17 03:33:18.203382 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 36.35s 2026-02-17 03:33:18.203397 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.19s 2026-02-17 03:33:18.203411 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 12.92s 2026-02-17 03:33:18.203423 | orchestrator | ovn-db 
: Restart ovn-sb-db container ------------------------------------ 8.93s 2026-02-17 03:33:18.203437 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.85s 2026-02-17 03:33:18.203472 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.85s 2026-02-17 03:33:18.203480 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.81s 2026-02-17 03:33:18.203489 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.02s 2026-02-17 03:33:18.203498 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.43s 2026-02-17 03:33:18.203512 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.11s 2026-02-17 03:33:18.203525 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.63s 2026-02-17 03:33:18.203536 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.52s 2026-02-17 03:33:18.203549 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.48s 2026-02-17 03:33:18.203562 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2026-02-17 03:33:18.203574 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.37s 2026-02-17 03:33:18.203586 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s 2026-02-17 03:33:18.203598 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.26s 2026-02-17 03:33:18.203611 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.19s 2026-02-17 03:33:18.203623 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.18s 2026-02-17 03:33:18.203635 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.17s 2026-02-17 03:33:18.770713 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-17 03:33:18.770797 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-17 03:33:21.198451 | orchestrator | 2026-02-17 03:33:21 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-17 03:33:31.310356 | orchestrator | 2026-02-17 03:33:31 | INFO  | Task 2c093540-45d9-4da0-b109-f1e9922ce489 (wipe-partitions) was prepared for execution. 2026-02-17 03:33:31.310472 | orchestrator | 2026-02-17 03:33:31 | INFO  | It takes a moment until task 2c093540-45d9-4da0-b109-f1e9922ce489 (wipe-partitions) has been started and output is visible here. 2026-02-17 03:33:44.934492 | orchestrator | 2026-02-17 03:33:44.934597 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-17 03:33:44.934610 | orchestrator | 2026-02-17 03:33:44.934619 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-17 03:33:44.934627 | orchestrator | Tuesday 17 February 2026 03:33:35 +0000 (0:00:00.171) 0:00:00.171 ****** 2026-02-17 03:33:44.934669 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:33:44.934679 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:33:44.934687 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:33:44.934695 | orchestrator | 2026-02-17 03:33:44.934703 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-17 03:33:44.934712 | orchestrator | Tuesday 17 February 2026 03:33:36 +0000 (0:00:00.653) 0:00:00.825 ****** 2026-02-17 03:33:44.934720 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:33:44.934728 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:33:44.934736 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:33:44.934744 | orchestrator | 2026-02-17 03:33:44.934752 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-17 03:33:44.934760 | orchestrator | Tuesday 17 February 2026 03:33:37 +0000 (0:00:00.418) 0:00:01.244 ****** 2026-02-17 03:33:44.934768 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:33:44.934777 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:33:44.934785 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:33:44.934793 | orchestrator | 2026-02-17 03:33:44.934801 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-17 03:33:44.934809 | orchestrator | Tuesday 17 February 2026 03:33:37 +0000 (0:00:00.631) 0:00:01.875 ****** 2026-02-17 03:33:44.934817 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:33:44.934825 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:33:44.934833 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:33:44.934842 | orchestrator | 2026-02-17 03:33:44.934849 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-17 03:33:44.934857 | orchestrator | Tuesday 17 February 2026 03:33:37 +0000 (0:00:00.292) 0:00:02.168 ****** 2026-02-17 03:33:44.934865 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-17 03:33:44.934874 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-17 03:33:44.934882 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-17 03:33:44.934890 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-17 03:33:44.934898 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-17 03:33:44.934905 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-17 03:33:44.934928 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-17 03:33:44.934936 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-17 03:33:44.934944 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-17 03:33:44.934952 | orchestrator | 2026-02-17 03:33:44.934960 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-17 03:33:44.934968 | orchestrator | Tuesday 17 February 2026 03:33:39 +0000 (0:00:01.252) 0:00:03.421 ****** 2026-02-17 03:33:44.934976 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-17 03:33:44.934984 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-17 03:33:44.934992 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-17 03:33:44.935000 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-17 03:33:44.935008 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-17 03:33:44.935016 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-17 03:33:44.935024 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-17 03:33:44.935032 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-17 03:33:44.935040 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-17 03:33:44.935048 | orchestrator | 2026-02-17 03:33:44.935056 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-17 03:33:44.935064 | orchestrator | Tuesday 17 February 2026 03:33:40 +0000 (0:00:01.720) 0:00:05.142 ****** 2026-02-17 03:33:44.935071 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-17 03:33:44.935079 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-17 03:33:44.935087 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-17 03:33:44.935095 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-17 03:33:44.935109 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-17 03:33:44.935117 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-17 03:33:44.935125 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-17 03:33:44.935132 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-17 03:33:44.935140 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-17 03:33:44.935148 | orchestrator | 2026-02-17 03:33:44.935156 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-17 03:33:44.935164 | orchestrator | Tuesday 17 February 2026 03:33:43 +0000 (0:00:02.210) 0:00:07.352 ****** 2026-02-17 03:33:44.935172 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:33:44.935180 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:33:44.935188 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:33:44.935195 | orchestrator | 2026-02-17 03:33:44.935203 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-17 03:33:44.935211 | orchestrator | Tuesday 17 February 2026 03:33:43 +0000 (0:00:00.675) 0:00:08.028 ****** 2026-02-17 03:33:44.935219 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:33:44.935227 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:33:44.935235 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:33:44.935262 | orchestrator | 2026-02-17 03:33:44.935272 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:33:44.935281 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:33:44.935290 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:33:44.935314 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:33:44.935323 | orchestrator | 2026-02-17 03:33:44.935331 | orchestrator | 2026-02-17 03:33:44.935339 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:33:44.935347 | orchestrator | Tuesday 17 February 2026 03:33:44 +0000 
(0:00:00.712) 0:00:08.741 ****** 2026-02-17 03:33:44.935355 | orchestrator | =============================================================================== 2026-02-17 03:33:44.935363 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.21s 2026-02-17 03:33:44.935371 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.72s 2026-02-17 03:33:44.935378 | orchestrator | Check device availability ----------------------------------------------- 1.25s 2026-02-17 03:33:44.935386 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s 2026-02-17 03:33:44.935396 | orchestrator | Reload udev rules ------------------------------------------------------- 0.68s 2026-02-17 03:33:44.935409 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.65s 2026-02-17 03:33:44.935421 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.63s 2026-02-17 03:33:44.935431 | orchestrator | Remove all rook related logical devices --------------------------------- 0.42s 2026-02-17 03:33:44.935442 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s 2026-02-17 03:33:57.668847 | orchestrator | 2026-02-17 03:33:57 | INFO  | Task 7c1ba641-e59a-4749-b93b-5abc5e4f94c2 (facts) was prepared for execution. 2026-02-17 03:33:57.668944 | orchestrator | 2026-02-17 03:33:57 | INFO  | It takes a moment until task 7c1ba641-e59a-4749-b93b-5abc5e4f94c2 (facts) has been started and output is visible here. 
2026-02-17 03:34:11.208920 | orchestrator | 2026-02-17 03:34:11.209009 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-17 03:34:11.209018 | orchestrator | 2026-02-17 03:34:11.209025 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-17 03:34:11.209050 | orchestrator | Tuesday 17 February 2026 03:34:02 +0000 (0:00:00.293) 0:00:00.293 ****** 2026-02-17 03:34:11.209056 | orchestrator | ok: [testbed-manager] 2026-02-17 03:34:11.209063 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:34:11.209069 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:34:11.209074 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:34:11.209079 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:34:11.209085 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:34:11.209090 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:34:11.209096 | orchestrator | 2026-02-17 03:34:11.209101 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-17 03:34:11.209107 | orchestrator | Tuesday 17 February 2026 03:34:03 +0000 (0:00:01.240) 0:00:01.534 ****** 2026-02-17 03:34:11.209113 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:34:11.209120 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:34:11.209125 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:34:11.209131 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:34:11.209136 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:34:11.209141 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:11.209147 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:34:11.209152 | orchestrator | 2026-02-17 03:34:11.209158 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-17 03:34:11.209163 | orchestrator | 2026-02-17 03:34:11.209169 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-17 03:34:11.209174 | orchestrator | Tuesday 17 February 2026 03:34:04 +0000 (0:00:01.390) 0:00:02.924 ****** 2026-02-17 03:34:11.209180 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:34:11.209185 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:34:11.209191 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:34:11.209196 | orchestrator | ok: [testbed-manager] 2026-02-17 03:34:11.209202 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:34:11.209207 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:34:11.209212 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:34:11.209218 | orchestrator | 2026-02-17 03:34:11.209223 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-17 03:34:11.209229 | orchestrator | 2026-02-17 03:34:11.209234 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-17 03:34:11.209239 | orchestrator | Tuesday 17 February 2026 03:34:10 +0000 (0:00:05.114) 0:00:08.038 ****** 2026-02-17 03:34:11.209245 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:34:11.209250 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:34:11.209305 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:34:11.209315 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:34:11.209323 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:34:11.209333 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:11.209339 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:34:11.209344 | orchestrator | 2026-02-17 03:34:11.209349 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:34:11.209355 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:34:11.209391 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-17 03:34:11.209397 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:34:11.209403 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:34:11.209408 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:34:11.209414 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:34:11.209427 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:34:11.209433 | orchestrator | 2026-02-17 03:34:11.209438 | orchestrator | 2026-02-17 03:34:11.209443 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:34:11.209449 | orchestrator | Tuesday 17 February 2026 03:34:10 +0000 (0:00:00.627) 0:00:08.666 ****** 2026-02-17 03:34:11.209454 | orchestrator | =============================================================================== 2026-02-17 03:34:11.209460 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.11s 2026-02-17 03:34:11.209465 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2026-02-17 03:34:11.209470 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s 2026-02-17 03:34:11.209476 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-02-17 03:34:13.941610 | orchestrator | 2026-02-17 03:34:13 | INFO  | Task 7549b373-28ac-4e38-b609-33482655d39b (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-17 03:34:13.941725 | orchestrator | 2026-02-17 03:34:13 | INFO  | It takes a moment until task 7549b373-28ac-4e38-b609-33482655d39b (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-02-17 03:34:27.993112 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-17 03:34:27.993252 | orchestrator | 2.16.14
2026-02-17 03:34:27.993337 | orchestrator |
2026-02-17 03:34:27.993351 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-17 03:34:27.993363 | orchestrator |
2026-02-17 03:34:27.993374 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-17 03:34:27.993386 | orchestrator | Tuesday 17 February 2026 03:34:19 +0000 (0:00:00.352) 0:00:00.352 ******
2026-02-17 03:34:27.993398 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 03:34:27.993408 | orchestrator |
2026-02-17 03:34:27.993437 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-17 03:34:27.993449 | orchestrator | Tuesday 17 February 2026 03:34:19 +0000 (0:00:00.260) 0:00:00.612 ******
2026-02-17 03:34:27.993460 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:34:27.993471 | orchestrator |
2026-02-17 03:34:27.993481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993491 | orchestrator | Tuesday 17 February 2026 03:34:19 +0000 (0:00:00.266) 0:00:00.879 ******
2026-02-17 03:34:27.993501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-17 03:34:27.993512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-17 03:34:27.993523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-17 03:34:27.993534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-17 03:34:27.993544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-17 03:34:27.993554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-17 03:34:27.993565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-17 03:34:27.993576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-17 03:34:27.993586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-17 03:34:27.993596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-17 03:34:27.993606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-17 03:34:27.993616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-17 03:34:27.993654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-17 03:34:27.993666 | orchestrator |
2026-02-17 03:34:27.993676 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993687 | orchestrator | Tuesday 17 February 2026 03:34:20 +0000 (0:00:00.585) 0:00:01.464 ******
2026-02-17 03:34:27.993697 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.993709 | orchestrator |
2026-02-17 03:34:27.993719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993729 | orchestrator | Tuesday 17 February 2026 03:34:20 +0000 (0:00:00.229) 0:00:01.693 ******
2026-02-17 03:34:27.993740 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.993751 | orchestrator |
2026-02-17 03:34:27.993761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993772 | orchestrator | Tuesday 17 February 2026 03:34:20 +0000 (0:00:00.224) 0:00:01.918 ******
2026-02-17 03:34:27.993782 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.993793 | orchestrator |
2026-02-17 03:34:27.993804 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993814 | orchestrator | Tuesday 17 February 2026 03:34:20 +0000 (0:00:00.225) 0:00:02.143 ******
2026-02-17 03:34:27.993825 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.993835 | orchestrator |
2026-02-17 03:34:27.993845 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993854 | orchestrator | Tuesday 17 February 2026 03:34:21 +0000 (0:00:00.219) 0:00:02.363 ******
2026-02-17 03:34:27.993863 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.993872 | orchestrator |
2026-02-17 03:34:27.993881 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993890 | orchestrator | Tuesday 17 February 2026 03:34:21 +0000 (0:00:00.257) 0:00:02.621 ******
2026-02-17 03:34:27.993899 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.993909 | orchestrator |
2026-02-17 03:34:27.993918 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993927 | orchestrator | Tuesday 17 February 2026 03:34:21 +0000 (0:00:00.239) 0:00:02.860 ******
2026-02-17 03:34:27.993936 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.993944 | orchestrator |
2026-02-17 03:34:27.993953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993963 | orchestrator | Tuesday 17 February 2026 03:34:21 +0000 (0:00:00.318) 0:00:03.179 ******
2026-02-17 03:34:27.993972 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.993981 | orchestrator |
2026-02-17 03:34:27.993989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.993998 | orchestrator | Tuesday 17 February 2026 03:34:22 +0000 (0:00:00.227) 0:00:03.406 ******
2026-02-17 03:34:27.994007 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25)
2026-02-17 03:34:27.994071 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25)
2026-02-17 03:34:27.994082 | orchestrator |
2026-02-17 03:34:27.994091 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.994123 | orchestrator | Tuesday 17 February 2026 03:34:22 +0000 (0:00:00.714) 0:00:04.121 ******
2026-02-17 03:34:27.994133 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427)
2026-02-17 03:34:27.994143 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427)
2026-02-17 03:34:27.994153 | orchestrator |
2026-02-17 03:34:27.994162 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.994172 | orchestrator | Tuesday 17 February 2026 03:34:23 +0000 (0:00:00.708) 0:00:04.829 ******
2026-02-17 03:34:27.994188 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350)
2026-02-17 03:34:27.994207 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350)
2026-02-17 03:34:27.994217 | orchestrator |
2026-02-17 03:34:27.994226 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.994236 | orchestrator | Tuesday 17 February 2026 03:34:24 +0000 (0:00:01.001) 0:00:05.831 ******
2026-02-17 03:34:27.994245 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3)
2026-02-17 03:34:27.994256 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3)
2026-02-17 03:34:27.994286 | orchestrator |
2026-02-17 03:34:27.994295 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:27.994303 | orchestrator | Tuesday 17 February 2026 03:34:25 +0000 (0:00:00.491) 0:00:06.323 ******
2026-02-17 03:34:27.994332 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-17 03:34:27.994342 | orchestrator |
2026-02-17 03:34:27.994350 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:27.994360 | orchestrator | Tuesday 17 February 2026 03:34:25 +0000 (0:00:00.376) 0:00:06.700 ******
2026-02-17 03:34:27.994369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-17 03:34:27.994379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-17 03:34:27.994388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-17 03:34:27.994398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-17 03:34:27.994407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-17 03:34:27.994417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-17 03:34:27.994426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-17 03:34:27.994436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-17 03:34:27.994446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-17 03:34:27.994456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-17 03:34:27.994466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-17 03:34:27.994476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-17 03:34:27.994486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-17 03:34:27.994495 | orchestrator |
2026-02-17 03:34:27.994504 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:27.994515 | orchestrator | Tuesday 17 February 2026 03:34:25 +0000 (0:00:00.421) 0:00:07.121 ******
2026-02-17 03:34:27.994524 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.994535 | orchestrator |
2026-02-17 03:34:27.994544 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:27.994553 | orchestrator | Tuesday 17 February 2026 03:34:26 +0000 (0:00:00.208) 0:00:07.330 ******
2026-02-17 03:34:27.994563 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.994573 | orchestrator |
2026-02-17 03:34:27.994583 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:27.994594 | orchestrator | Tuesday 17 February 2026 03:34:26 +0000 (0:00:00.227) 0:00:07.557 ******
2026-02-17 03:34:27.994605 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.994615 | orchestrator |
2026-02-17 03:34:27.994626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:27.994637 | orchestrator | Tuesday 17 February 2026 03:34:26 +0000 (0:00:00.213) 0:00:07.771 ******
2026-02-17 03:34:27.994656 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.994667 | orchestrator |
2026-02-17 03:34:27.994678 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:27.994688 | orchestrator | Tuesday 17 February 2026 03:34:26 +0000 (0:00:00.248) 0:00:08.019 ******
2026-02-17 03:34:27.994699 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.994709 | orchestrator |
2026-02-17 03:34:27.994720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:27.994730 | orchestrator | Tuesday 17 February 2026 03:34:27 +0000 (0:00:00.228) 0:00:08.248 ******
2026-02-17 03:34:27.994740 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.994750 | orchestrator |
2026-02-17 03:34:27.994761 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:27.994771 | orchestrator | Tuesday 17 February 2026 03:34:27 +0000 (0:00:00.725) 0:00:08.973 ******
2026-02-17 03:34:27.994781 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:27.994791 | orchestrator |
2026-02-17 03:34:27.994810 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:35.955502 | orchestrator | Tuesday 17 February 2026 03:34:27 +0000 (0:00:00.227) 0:00:09.201 ******
2026-02-17 03:34:35.955617 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.955634 | orchestrator |
2026-02-17 03:34:35.955647 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:35.955659 | orchestrator | Tuesday 17 February 2026 03:34:28 +0000 (0:00:00.219) 0:00:09.420 ******
2026-02-17 03:34:35.955670 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-17 03:34:35.955681 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-17 03:34:35.955709 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-17 03:34:35.955720 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-17 03:34:35.955731 | orchestrator |
2026-02-17 03:34:35.955742 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:35.955753 | orchestrator | Tuesday 17 February 2026 03:34:28 +0000 (0:00:00.744) 0:00:10.165 ******
2026-02-17 03:34:35.955764 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.955775 | orchestrator |
2026-02-17 03:34:35.955786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:35.955797 | orchestrator | Tuesday 17 February 2026 03:34:29 +0000 (0:00:00.211) 0:00:10.377 ******
2026-02-17 03:34:35.955808 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.955818 | orchestrator |
2026-02-17 03:34:35.955829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:35.955840 | orchestrator | Tuesday 17 February 2026 03:34:29 +0000 (0:00:00.238) 0:00:10.615 ******
2026-02-17 03:34:35.955851 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.955861 | orchestrator |
2026-02-17 03:34:35.955872 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:35.955883 | orchestrator | Tuesday 17 February 2026 03:34:29 +0000 (0:00:00.230) 0:00:10.846 ******
2026-02-17 03:34:35.955894 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.955905 | orchestrator |
2026-02-17 03:34:35.955915 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-17 03:34:35.955926 | orchestrator | Tuesday 17 February 2026 03:34:29 +0000 (0:00:00.245) 0:00:11.091 ******
2026-02-17 03:34:35.955937 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-17 03:34:35.955948 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-17 03:34:35.955958 | orchestrator |
2026-02-17 03:34:35.955970 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-17 03:34:35.955981 | orchestrator | Tuesday 17 February 2026 03:34:30 +0000 (0:00:00.215) 0:00:11.306 ******
2026-02-17 03:34:35.955991 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956002 | orchestrator |
2026-02-17 03:34:35.956015 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-17 03:34:35.956027 | orchestrator | Tuesday 17 February 2026 03:34:30 +0000 (0:00:00.148) 0:00:11.454 ******
2026-02-17 03:34:35.956064 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956077 | orchestrator |
2026-02-17 03:34:35.956090 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-17 03:34:35.956103 | orchestrator | Tuesday 17 February 2026 03:34:30 +0000 (0:00:00.150) 0:00:11.605 ******
2026-02-17 03:34:35.956115 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956127 | orchestrator |
2026-02-17 03:34:35.956139 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-17 03:34:35.956152 | orchestrator | Tuesday 17 February 2026 03:34:30 +0000 (0:00:00.385) 0:00:11.991 ******
2026-02-17 03:34:35.956163 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:34:35.956176 | orchestrator |
2026-02-17 03:34:35.956188 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-17 03:34:35.956200 | orchestrator | Tuesday 17 February 2026 03:34:30 +0000 (0:00:00.154) 0:00:12.146 ******
2026-02-17 03:34:35.956213 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '366ad200-d272-50e2-9bbd-3174591b235f'}})
2026-02-17 03:34:35.956225 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}})
2026-02-17 03:34:35.956237 | orchestrator |
2026-02-17 03:34:35.956249 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-17 03:34:35.956261 | orchestrator | Tuesday 17 February 2026 03:34:31 +0000 (0:00:00.183) 0:00:12.329 ******
2026-02-17 03:34:35.956311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '366ad200-d272-50e2-9bbd-3174591b235f'}})
2026-02-17 03:34:35.956325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}})
2026-02-17 03:34:35.956337 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956349 | orchestrator |
2026-02-17 03:34:35.956362 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-17 03:34:35.956373 | orchestrator | Tuesday 17 February 2026 03:34:31 +0000 (0:00:00.181) 0:00:12.510 ******
2026-02-17 03:34:35.956384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '366ad200-d272-50e2-9bbd-3174591b235f'}})
2026-02-17 03:34:35.956395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}})
2026-02-17 03:34:35.956406 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956417 | orchestrator |
2026-02-17 03:34:35.956427 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-17 03:34:35.956438 | orchestrator | Tuesday 17 February 2026 03:34:31 +0000 (0:00:00.167) 0:00:12.678 ******
2026-02-17 03:34:35.956449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '366ad200-d272-50e2-9bbd-3174591b235f'}})
2026-02-17 03:34:35.956478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}})
2026-02-17 03:34:35.956490 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956501 | orchestrator |
2026-02-17 03:34:35.956512 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-17 03:34:35.956523 | orchestrator | Tuesday 17 February 2026 03:34:31 +0000 (0:00:00.167) 0:00:12.845 ******
2026-02-17 03:34:35.956534 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:34:35.956545 | orchestrator |
2026-02-17 03:34:35.956556 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-17 03:34:35.956572 | orchestrator | Tuesday 17 February 2026 03:34:31 +0000 (0:00:00.165) 0:00:13.011 ******
2026-02-17 03:34:35.956583 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:34:35.956594 | orchestrator |
2026-02-17 03:34:35.956605 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-17 03:34:35.956616 | orchestrator | Tuesday 17 February 2026 03:34:31 +0000 (0:00:00.156) 0:00:13.168 ******
2026-02-17 03:34:35.956636 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956647 | orchestrator |
2026-02-17 03:34:35.956658 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-17 03:34:35.956669 | orchestrator | Tuesday 17 February 2026 03:34:32 +0000 (0:00:00.156) 0:00:13.325 ******
2026-02-17 03:34:35.956679 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956690 | orchestrator |
2026-02-17 03:34:35.956701 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-17 03:34:35.956711 | orchestrator | Tuesday 17 February 2026 03:34:32 +0000 (0:00:00.146) 0:00:13.472 ******
2026-02-17 03:34:35.956722 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956733 | orchestrator |
2026-02-17 03:34:35.956743 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-17 03:34:35.956754 | orchestrator | Tuesday 17 February 2026 03:34:32 +0000 (0:00:00.142) 0:00:13.614 ******
2026-02-17 03:34:35.956765 | orchestrator | ok: [testbed-node-3] => {
2026-02-17 03:34:35.956776 | orchestrator |     "ceph_osd_devices": {
2026-02-17 03:34:35.956787 | orchestrator |         "sdb": {
2026-02-17 03:34:35.956798 | orchestrator |             "osd_lvm_uuid": "366ad200-d272-50e2-9bbd-3174591b235f"
2026-02-17 03:34:35.956809 | orchestrator |         },
2026-02-17 03:34:35.956819 | orchestrator |         "sdc": {
2026-02-17 03:34:35.956830 | orchestrator |             "osd_lvm_uuid": "c478ad6b-fe8a-5fdf-805d-21e03f23f5d3"
2026-02-17 03:34:35.956841 | orchestrator |         }
2026-02-17 03:34:35.956851 | orchestrator |     }
2026-02-17 03:34:35.956862 | orchestrator | }
2026-02-17 03:34:35.956873 | orchestrator |
2026-02-17 03:34:35.956884 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-17 03:34:35.956895 | orchestrator | Tuesday 17 February 2026 03:34:32 +0000 (0:00:00.382) 0:00:13.997 ******
2026-02-17 03:34:35.956906 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956917 | orchestrator |
2026-02-17 03:34:35.956928 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-17 03:34:35.956939 | orchestrator | Tuesday 17 February 2026 03:34:32 +0000 (0:00:00.165) 0:00:14.163 ******
2026-02-17 03:34:35.956949 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.956960 | orchestrator |
2026-02-17 03:34:35.956971 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-17 03:34:35.956981 | orchestrator | Tuesday 17 February 2026 03:34:33 +0000 (0:00:00.147) 0:00:14.310 ******
2026-02-17 03:34:35.956992 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:34:35.957003 | orchestrator |
2026-02-17 03:34:35.957013 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-17 03:34:35.957024 | orchestrator | Tuesday 17 February 2026 03:34:33 +0000 (0:00:00.146) 0:00:14.457 ******
2026-02-17 03:34:35.957034 | orchestrator | changed: [testbed-node-3] => {
2026-02-17 03:34:35.957045 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-17 03:34:35.957082 | orchestrator |         "ceph_osd_devices": {
2026-02-17 03:34:35.957094 | orchestrator |             "sdb": {
2026-02-17 03:34:35.957104 | orchestrator |                 "osd_lvm_uuid": "366ad200-d272-50e2-9bbd-3174591b235f"
2026-02-17 03:34:35.957115 | orchestrator |             },
2026-02-17 03:34:35.957126 | orchestrator |             "sdc": {
2026-02-17 03:34:35.957137 | orchestrator |                 "osd_lvm_uuid": "c478ad6b-fe8a-5fdf-805d-21e03f23f5d3"
2026-02-17 03:34:35.957148 | orchestrator |             }
2026-02-17 03:34:35.957159 | orchestrator |         },
2026-02-17 03:34:35.957170 | orchestrator |         "lvm_volumes": [
2026-02-17 03:34:35.957180 | orchestrator |             {
2026-02-17 03:34:35.957191 | orchestrator |                 "data": "osd-block-366ad200-d272-50e2-9bbd-3174591b235f",
2026-02-17 03:34:35.957202 | orchestrator |                 "data_vg": "ceph-366ad200-d272-50e2-9bbd-3174591b235f"
2026-02-17 03:34:35.957213 | orchestrator |             },
2026-02-17 03:34:35.957223 | orchestrator |             {
2026-02-17 03:34:35.957234 | orchestrator |                 "data": "osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3",
2026-02-17 03:34:35.957254 | orchestrator |                 "data_vg": "ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3"
2026-02-17 03:34:35.957294 | orchestrator |             }
2026-02-17 03:34:35.957306 | orchestrator |         ]
2026-02-17 03:34:35.957317 | orchestrator |     }
2026-02-17 03:34:35.957327 | orchestrator | }
2026-02-17 03:34:35.957338 | orchestrator |
2026-02-17 03:34:35.957349 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-17 03:34:35.957360 | orchestrator | Tuesday 17 February 2026 03:34:33 +0000 (0:00:00.237) 0:00:14.695 ******
2026-02-17 03:34:35.957371 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 03:34:35.957382 | orchestrator |
2026-02-17 03:34:35.957392 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-17 03:34:35.957403 | orchestrator |
2026-02-17 03:34:35.957414 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-17 03:34:35.957425 | orchestrator | Tuesday 17 February 2026 03:34:35 +0000 (0:00:01.906) 0:00:16.601 ******
2026-02-17 03:34:35.957435 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-17 03:34:35.957446 | orchestrator |
2026-02-17 03:34:35.957457 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-17 03:34:35.957468 | orchestrator | Tuesday 17 February 2026 03:34:35 +0000 (0:00:00.348) 0:00:16.950 ******
2026-02-17 03:34:35.957479 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:34:35.957490 | orchestrator |
2026-02-17 03:34:35.957508 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.424252 | orchestrator | Tuesday 17 February 2026 03:34:35 +0000 (0:00:00.220) 0:00:17.171 ******
2026-02-17 03:34:45.424454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-17 03:34:45.424481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-17 03:34:45.424494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-17 03:34:45.424523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-17 03:34:45.424535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-17 03:34:45.424546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-17 03:34:45.424557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-17 03:34:45.424568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-17 03:34:45.424579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-17 03:34:45.424590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-17 03:34:45.424601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-17 03:34:45.424612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-17 03:34:45.424623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-17 03:34:45.424634 | orchestrator |
2026-02-17 03:34:45.424645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.424656 | orchestrator | Tuesday 17 February 2026 03:34:36 +0000 (0:00:00.647) 0:00:17.818 ******
2026-02-17 03:34:45.424667 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.424680 | orchestrator |
2026-02-17 03:34:45.424691 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.424702 | orchestrator | Tuesday 17 February 2026 03:34:36 +0000 (0:00:00.238) 0:00:18.057 ******
2026-02-17 03:34:45.424713 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.424724 | orchestrator |
2026-02-17 03:34:45.424735 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.424746 | orchestrator | Tuesday 17 February 2026 03:34:37 +0000 (0:00:00.234) 0:00:18.291 ******
2026-02-17 03:34:45.424778 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.424793 | orchestrator |
2026-02-17 03:34:45.424805 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.424817 | orchestrator | Tuesday 17 February 2026 03:34:37 +0000 (0:00:00.231) 0:00:18.523 ******
2026-02-17 03:34:45.424830 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.424843 | orchestrator |
2026-02-17 03:34:45.424855 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.424867 | orchestrator | Tuesday 17 February 2026 03:34:37 +0000 (0:00:00.213) 0:00:18.736 ******
2026-02-17 03:34:45.424879 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.424892 | orchestrator |
2026-02-17 03:34:45.424904 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.424916 | orchestrator | Tuesday 17 February 2026 03:34:37 +0000 (0:00:00.225) 0:00:18.961 ******
2026-02-17 03:34:45.424929 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.424942 | orchestrator |
2026-02-17 03:34:45.424953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.424963 | orchestrator | Tuesday 17 February 2026 03:34:37 +0000 (0:00:00.229) 0:00:19.191 ******
2026-02-17 03:34:45.424974 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.424985 | orchestrator |
2026-02-17 03:34:45.424996 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.425007 | orchestrator | Tuesday 17 February 2026 03:34:38 +0000 (0:00:00.222) 0:00:19.413 ******
2026-02-17 03:34:45.425018 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425029 | orchestrator |
2026-02-17 03:34:45.425040 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.425051 | orchestrator | Tuesday 17 February 2026 03:34:38 +0000 (0:00:00.217) 0:00:19.631 ******
2026-02-17 03:34:45.425062 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15)
2026-02-17 03:34:45.425074 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15)
2026-02-17 03:34:45.425085 | orchestrator |
2026-02-17 03:34:45.425096 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.425107 | orchestrator | Tuesday 17 February 2026 03:34:39 +0000 (0:00:00.675) 0:00:20.306 ******
2026-02-17 03:34:45.425118 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856)
2026-02-17 03:34:45.425130 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856)
2026-02-17 03:34:45.425141 | orchestrator |
2026-02-17 03:34:45.425152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.425162 | orchestrator | Tuesday 17 February 2026 03:34:39 +0000 (0:00:00.773) 0:00:21.079 ******
2026-02-17 03:34:45.425173 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67)
2026-02-17 03:34:45.425184 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67)
2026-02-17 03:34:45.425196 | orchestrator |
2026-02-17 03:34:45.425207 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.425237 | orchestrator | Tuesday 17 February 2026 03:34:40 +0000 (0:00:01.007) 0:00:22.087 ******
2026-02-17 03:34:45.425249 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416)
2026-02-17 03:34:45.425260 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416)
2026-02-17 03:34:45.425352 | orchestrator |
2026-02-17 03:34:45.425373 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:34:45.425391 | orchestrator | Tuesday 17 February 2026 03:34:41 +0000 (0:00:00.480) 0:00:22.568 ******
2026-02-17 03:34:45.425403 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-17 03:34:45.425425 | orchestrator |
2026-02-17 03:34:45.425436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425447 | orchestrator | Tuesday 17 February 2026 03:34:41 +0000 (0:00:00.418) 0:00:22.986 ******
2026-02-17 03:34:45.425458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-17 03:34:45.425469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-17 03:34:45.425480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-17 03:34:45.425491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-17 03:34:45.425501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-17 03:34:45.425512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-17 03:34:45.425523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-17 03:34:45.425534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-17 03:34:45.425544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-17 03:34:45.425555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-17 03:34:45.425567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-17 03:34:45.425578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-17 03:34:45.425588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-17 03:34:45.425600 | orchestrator |
2026-02-17 03:34:45.425611 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425622 | orchestrator | Tuesday 17 February 2026 03:34:42 +0000 (0:00:00.392) 0:00:23.379 ******
2026-02-17 03:34:45.425632 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425644 | orchestrator |
2026-02-17 03:34:45.425654 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425665 | orchestrator | Tuesday 17 February 2026 03:34:42 +0000 (0:00:00.206) 0:00:23.585 ******
2026-02-17 03:34:45.425675 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425685 | orchestrator |
2026-02-17 03:34:45.425695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425704 | orchestrator | Tuesday 17 February 2026 03:34:42 +0000 (0:00:00.213) 0:00:23.799 ******
2026-02-17 03:34:45.425714 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425724 | orchestrator |
2026-02-17 03:34:45.425733 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425743 | orchestrator | Tuesday 17 February 2026 03:34:42 +0000 (0:00:00.223) 0:00:24.023 ******
2026-02-17 03:34:45.425753 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425762 | orchestrator |
2026-02-17 03:34:45.425772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425782 | orchestrator | Tuesday 17 February 2026 03:34:43 +0000 (0:00:00.227) 0:00:24.250 ******
2026-02-17 03:34:45.425791 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425801 | orchestrator |
2026-02-17 03:34:45.425811 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425821 | orchestrator | Tuesday 17 February 2026 03:34:43 +0000 (0:00:00.242) 0:00:24.492 ******
2026-02-17 03:34:45.425830 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425840 | orchestrator |
2026-02-17 03:34:45.425850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425860 | orchestrator | Tuesday 17 February 2026 03:34:43 +0000 (0:00:00.216) 0:00:24.709 ******
2026-02-17 03:34:45.425875 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425885 | orchestrator |
2026-02-17 03:34:45.425895 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425904 | orchestrator | Tuesday 17 February 2026 03:34:43 +0000 (0:00:00.246) 0:00:24.956 ******
2026-02-17 03:34:45.425914 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:45.425924 | orchestrator |
2026-02-17 03:34:45.425933 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.425943 | orchestrator | Tuesday 17 February 2026 03:34:44 +0000 (0:00:00.740) 0:00:25.697 ******
2026-02-17 03:34:45.425953 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-17 03:34:45.425963 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-17 03:34:45.425973 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-17 03:34:45.425983 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-17 03:34:45.425993 | orchestrator |
2026-02-17 03:34:45.426003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:45.426013 | orchestrator | Tuesday 17 February 2026 03:34:45 +0000 (0:00:00.724) 0:00:26.421 ******
2026-02-17 03:34:45.426078 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:51.873194 | orchestrator |
2026-02-17 03:34:51.873331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:51.873345 | orchestrator | Tuesday 17 February 2026 03:34:45 +0000 (0:00:00.221) 0:00:26.643 ******
2026-02-17 03:34:51.873353 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:51.873362 | orchestrator |
2026-02-17 03:34:51.873370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:51.873378 | orchestrator | Tuesday 17 February 2026 03:34:45 +0000 (0:00:00.233) 0:00:26.876 ******
2026-02-17 03:34:51.873411 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:51.873419 | orchestrator |
2026-02-17 03:34:51.873426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:34:51.873434 | orchestrator | Tuesday 17 February 2026 03:34:45 +0000 (0:00:00.229) 0:00:27.106 ******
2026-02-17 03:34:51.873441 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:34:51.873449 | orchestrator |
2026-02-17 03:34:51.873456 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-17 03:34:51.873463 | orchestrator | Tuesday 17 February 2026 03:34:46 +0000 (0:00:00.249) 0:00:27.356 ******
2026-02-17 03:34:51.873471 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-17 03:34:51.873479 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-17 03:34:51.873486 | orchestrator |
2026-02-17 03:34:51.873493 | orchestrator | TASK [Generate WAL VG names]
*************************************************** 2026-02-17 03:34:51.873501 | orchestrator | Tuesday 17 February 2026 03:34:46 +0000 (0:00:00.191) 0:00:27.547 ****** 2026-02-17 03:34:51.873508 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873515 | orchestrator | 2026-02-17 03:34:51.873523 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-17 03:34:51.873530 | orchestrator | Tuesday 17 February 2026 03:34:46 +0000 (0:00:00.152) 0:00:27.700 ****** 2026-02-17 03:34:51.873537 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873544 | orchestrator | 2026-02-17 03:34:51.873552 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-17 03:34:51.873559 | orchestrator | Tuesday 17 February 2026 03:34:46 +0000 (0:00:00.156) 0:00:27.857 ****** 2026-02-17 03:34:51.873566 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873573 | orchestrator | 2026-02-17 03:34:51.873581 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-17 03:34:51.873588 | orchestrator | Tuesday 17 February 2026 03:34:46 +0000 (0:00:00.160) 0:00:28.018 ****** 2026-02-17 03:34:51.873595 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:34:51.873603 | orchestrator | 2026-02-17 03:34:51.873611 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-17 03:34:51.873618 | orchestrator | Tuesday 17 February 2026 03:34:46 +0000 (0:00:00.132) 0:00:28.150 ****** 2026-02-17 03:34:51.873642 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}}) 2026-02-17 03:34:51.873651 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8aff4da6-f81a-563d-a807-caa30e1cb6b0'}}) 2026-02-17 03:34:51.873659 | orchestrator | 2026-02-17 03:34:51.873666 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-17 03:34:51.873674 | orchestrator | Tuesday 17 February 2026 03:34:47 +0000 (0:00:00.174) 0:00:28.324 ****** 2026-02-17 03:34:51.873682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}})  2026-02-17 03:34:51.873691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8aff4da6-f81a-563d-a807-caa30e1cb6b0'}})  2026-02-17 03:34:51.873698 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873706 | orchestrator | 2026-02-17 03:34:51.873713 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-17 03:34:51.873720 | orchestrator | Tuesday 17 February 2026 03:34:47 +0000 (0:00:00.387) 0:00:28.712 ****** 2026-02-17 03:34:51.873727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}})  2026-02-17 03:34:51.873737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8aff4da6-f81a-563d-a807-caa30e1cb6b0'}})  2026-02-17 03:34:51.873745 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873754 | orchestrator | 2026-02-17 03:34:51.873762 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-17 03:34:51.873770 | orchestrator | Tuesday 17 February 2026 03:34:47 +0000 (0:00:00.215) 0:00:28.927 ****** 2026-02-17 03:34:51.873778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}})  2026-02-17 03:34:51.873787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8aff4da6-f81a-563d-a807-caa30e1cb6b0'}})  2026-02-17 03:34:51.873795 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873803 | 
orchestrator | 2026-02-17 03:34:51.873811 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-17 03:34:51.873819 | orchestrator | Tuesday 17 February 2026 03:34:47 +0000 (0:00:00.191) 0:00:29.119 ****** 2026-02-17 03:34:51.873827 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:34:51.873835 | orchestrator | 2026-02-17 03:34:51.873843 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-17 03:34:51.873851 | orchestrator | Tuesday 17 February 2026 03:34:48 +0000 (0:00:00.157) 0:00:29.277 ****** 2026-02-17 03:34:51.873860 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:34:51.873868 | orchestrator | 2026-02-17 03:34:51.873876 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-17 03:34:51.873884 | orchestrator | Tuesday 17 February 2026 03:34:48 +0000 (0:00:00.153) 0:00:29.430 ****** 2026-02-17 03:34:51.873906 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873914 | orchestrator | 2026-02-17 03:34:51.873922 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-17 03:34:51.873931 | orchestrator | Tuesday 17 February 2026 03:34:48 +0000 (0:00:00.144) 0:00:29.574 ****** 2026-02-17 03:34:51.873939 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873947 | orchestrator | 2026-02-17 03:34:51.873956 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-17 03:34:51.873964 | orchestrator | Tuesday 17 February 2026 03:34:48 +0000 (0:00:00.147) 0:00:29.722 ****** 2026-02-17 03:34:51.873976 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.873984 | orchestrator | 2026-02-17 03:34:51.873992 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-17 03:34:51.874001 | orchestrator | Tuesday 17 February 2026 03:34:48 +0000 
(0:00:00.150) 0:00:29.872 ****** 2026-02-17 03:34:51.874057 | orchestrator | ok: [testbed-node-4] => { 2026-02-17 03:34:51.874068 | orchestrator |  "ceph_osd_devices": { 2026-02-17 03:34:51.874077 | orchestrator |  "sdb": { 2026-02-17 03:34:51.874086 | orchestrator |  "osd_lvm_uuid": "33b7cf65-698e-5092-b1e1-7b58bfaeaf8b" 2026-02-17 03:34:51.874094 | orchestrator |  }, 2026-02-17 03:34:51.874103 | orchestrator |  "sdc": { 2026-02-17 03:34:51.874111 | orchestrator |  "osd_lvm_uuid": "8aff4da6-f81a-563d-a807-caa30e1cb6b0" 2026-02-17 03:34:51.874119 | orchestrator |  } 2026-02-17 03:34:51.874126 | orchestrator |  } 2026-02-17 03:34:51.874133 | orchestrator | } 2026-02-17 03:34:51.874141 | orchestrator | 2026-02-17 03:34:51.874148 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-17 03:34:51.874156 | orchestrator | Tuesday 17 February 2026 03:34:48 +0000 (0:00:00.152) 0:00:30.025 ****** 2026-02-17 03:34:51.874163 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.874170 | orchestrator | 2026-02-17 03:34:51.874177 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-17 03:34:51.874185 | orchestrator | Tuesday 17 February 2026 03:34:48 +0000 (0:00:00.122) 0:00:30.147 ****** 2026-02-17 03:34:51.874192 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.874199 | orchestrator | 2026-02-17 03:34:51.874209 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-17 03:34:51.874221 | orchestrator | Tuesday 17 February 2026 03:34:49 +0000 (0:00:00.157) 0:00:30.305 ****** 2026-02-17 03:34:51.874239 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:34:51.874250 | orchestrator | 2026-02-17 03:34:51.874262 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-17 03:34:51.874305 | orchestrator | Tuesday 17 February 2026 03:34:49 +0000 
(0:00:00.150) 0:00:30.455 ****** 2026-02-17 03:34:51.874328 | orchestrator | changed: [testbed-node-4] => { 2026-02-17 03:34:51.874339 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-17 03:34:51.874350 | orchestrator |  "ceph_osd_devices": { 2026-02-17 03:34:51.874362 | orchestrator |  "sdb": { 2026-02-17 03:34:51.874374 | orchestrator |  "osd_lvm_uuid": "33b7cf65-698e-5092-b1e1-7b58bfaeaf8b" 2026-02-17 03:34:51.874385 | orchestrator |  }, 2026-02-17 03:34:51.874397 | orchestrator |  "sdc": { 2026-02-17 03:34:51.874410 | orchestrator |  "osd_lvm_uuid": "8aff4da6-f81a-563d-a807-caa30e1cb6b0" 2026-02-17 03:34:51.874422 | orchestrator |  } 2026-02-17 03:34:51.874434 | orchestrator |  }, 2026-02-17 03:34:51.874447 | orchestrator |  "lvm_volumes": [ 2026-02-17 03:34:51.874454 | orchestrator |  { 2026-02-17 03:34:51.874462 | orchestrator |  "data": "osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b", 2026-02-17 03:34:51.874469 | orchestrator |  "data_vg": "ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b" 2026-02-17 03:34:51.874476 | orchestrator |  }, 2026-02-17 03:34:51.874483 | orchestrator |  { 2026-02-17 03:34:51.874490 | orchestrator |  "data": "osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0", 2026-02-17 03:34:51.874497 | orchestrator |  "data_vg": "ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0" 2026-02-17 03:34:51.874505 | orchestrator |  } 2026-02-17 03:34:51.874512 | orchestrator |  ] 2026-02-17 03:34:51.874519 | orchestrator |  } 2026-02-17 03:34:51.874526 | orchestrator | } 2026-02-17 03:34:51.874534 | orchestrator | 2026-02-17 03:34:51.874541 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-17 03:34:51.874548 | orchestrator | Tuesday 17 February 2026 03:34:49 +0000 (0:00:00.445) 0:00:30.900 ****** 2026-02-17 03:34:51.874555 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-17 03:34:51.874563 | orchestrator | 2026-02-17 03:34:51.874570 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-17 03:34:51.874577 | orchestrator | 2026-02-17 03:34:51.874584 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-17 03:34:51.874599 | orchestrator | Tuesday 17 February 2026 03:34:50 +0000 (0:00:01.234) 0:00:32.135 ****** 2026-02-17 03:34:51.874606 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-17 03:34:51.874614 | orchestrator | 2026-02-17 03:34:51.874621 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-17 03:34:51.874628 | orchestrator | Tuesday 17 February 2026 03:34:51 +0000 (0:00:00.268) 0:00:32.403 ****** 2026-02-17 03:34:51.874635 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:34:51.874642 | orchestrator | 2026-02-17 03:34:51.874650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:34:51.874657 | orchestrator | Tuesday 17 February 2026 03:34:51 +0000 (0:00:00.260) 0:00:32.664 ****** 2026-02-17 03:34:51.874664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-17 03:34:51.874671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-17 03:34:51.874679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-17 03:34:51.874686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-17 03:34:51.874693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-17 03:34:51.874708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-17 03:35:01.407270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-17 03:35:01.407498 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-17 03:35:01.407525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-17 03:35:01.407565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-17 03:35:01.407586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-17 03:35:01.407605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-17 03:35:01.407624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-17 03:35:01.407644 | orchestrator | 2026-02-17 03:35:01.407665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.407685 | orchestrator | Tuesday 17 February 2026 03:34:51 +0000 (0:00:00.419) 0:00:33.084 ****** 2026-02-17 03:35:01.407705 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.407726 | orchestrator | 2026-02-17 03:35:01.407744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.407763 | orchestrator | Tuesday 17 February 2026 03:34:52 +0000 (0:00:00.209) 0:00:33.293 ****** 2026-02-17 03:35:01.407782 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.407802 | orchestrator | 2026-02-17 03:35:01.407820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.407840 | orchestrator | Tuesday 17 February 2026 03:34:52 +0000 (0:00:00.245) 0:00:33.538 ****** 2026-02-17 03:35:01.407859 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.407878 | orchestrator | 2026-02-17 03:35:01.407897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.407917 | 
orchestrator | Tuesday 17 February 2026 03:34:52 +0000 (0:00:00.199) 0:00:33.738 ****** 2026-02-17 03:35:01.407936 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.407954 | orchestrator | 2026-02-17 03:35:01.407973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.407992 | orchestrator | Tuesday 17 February 2026 03:34:53 +0000 (0:00:00.685) 0:00:34.424 ****** 2026-02-17 03:35:01.408011 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.408030 | orchestrator | 2026-02-17 03:35:01.408048 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.408066 | orchestrator | Tuesday 17 February 2026 03:34:53 +0000 (0:00:00.225) 0:00:34.649 ****** 2026-02-17 03:35:01.408118 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.408138 | orchestrator | 2026-02-17 03:35:01.408156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.408174 | orchestrator | Tuesday 17 February 2026 03:34:53 +0000 (0:00:00.210) 0:00:34.860 ****** 2026-02-17 03:35:01.408193 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.408210 | orchestrator | 2026-02-17 03:35:01.408228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.408245 | orchestrator | Tuesday 17 February 2026 03:34:53 +0000 (0:00:00.223) 0:00:35.084 ****** 2026-02-17 03:35:01.408263 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.408314 | orchestrator | 2026-02-17 03:35:01.408335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.408357 | orchestrator | Tuesday 17 February 2026 03:34:54 +0000 (0:00:00.220) 0:00:35.304 ****** 2026-02-17 03:35:01.408378 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944) 2026-02-17 03:35:01.408399 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944) 2026-02-17 03:35:01.408419 | orchestrator | 2026-02-17 03:35:01.408439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.408458 | orchestrator | Tuesday 17 February 2026 03:34:54 +0000 (0:00:00.451) 0:00:35.756 ****** 2026-02-17 03:35:01.408479 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86) 2026-02-17 03:35:01.408500 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86) 2026-02-17 03:35:01.408520 | orchestrator | 2026-02-17 03:35:01.408540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.408561 | orchestrator | Tuesday 17 February 2026 03:34:55 +0000 (0:00:00.493) 0:00:36.250 ****** 2026-02-17 03:35:01.408582 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d) 2026-02-17 03:35:01.408601 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d) 2026-02-17 03:35:01.408613 | orchestrator | 2026-02-17 03:35:01.408624 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:35:01.408635 | orchestrator | Tuesday 17 February 2026 03:34:55 +0000 (0:00:00.485) 0:00:36.735 ****** 2026-02-17 03:35:01.408647 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc) 2026-02-17 03:35:01.408658 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc) 2026-02-17 03:35:01.408669 | orchestrator | 2026-02-17 03:35:01.408680 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-17 03:35:01.408691 | orchestrator | Tuesday 17 February 2026 03:34:56 +0000 (0:00:00.510) 0:00:37.245 ****** 2026-02-17 03:35:01.408701 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-17 03:35:01.408712 | orchestrator | 2026-02-17 03:35:01.408723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.408756 | orchestrator | Tuesday 17 February 2026 03:34:56 +0000 (0:00:00.376) 0:00:37.622 ****** 2026-02-17 03:35:01.408768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-17 03:35:01.408779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-17 03:35:01.408791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-17 03:35:01.408809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-17 03:35:01.408820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-17 03:35:01.408831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-17 03:35:01.408853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-17 03:35:01.408864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-17 03:35:01.408875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-17 03:35:01.408885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-17 03:35:01.408896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-17 03:35:01.408907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-17 03:35:01.408918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-17 03:35:01.408928 | orchestrator | 2026-02-17 03:35:01.408939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.408950 | orchestrator | Tuesday 17 February 2026 03:34:57 +0000 (0:00:00.683) 0:00:38.306 ****** 2026-02-17 03:35:01.408960 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.408971 | orchestrator | 2026-02-17 03:35:01.408982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.408993 | orchestrator | Tuesday 17 February 2026 03:34:57 +0000 (0:00:00.230) 0:00:38.536 ****** 2026-02-17 03:35:01.409004 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409015 | orchestrator | 2026-02-17 03:35:01.409025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409036 | orchestrator | Tuesday 17 February 2026 03:34:57 +0000 (0:00:00.214) 0:00:38.751 ****** 2026-02-17 03:35:01.409047 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409058 | orchestrator | 2026-02-17 03:35:01.409068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409079 | orchestrator | Tuesday 17 February 2026 03:34:57 +0000 (0:00:00.210) 0:00:38.961 ****** 2026-02-17 03:35:01.409090 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409101 | orchestrator | 2026-02-17 03:35:01.409112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409123 | orchestrator | Tuesday 17 February 2026 03:34:57 +0000 (0:00:00.231) 0:00:39.193 ****** 2026-02-17 03:35:01.409133 
| orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409144 | orchestrator | 2026-02-17 03:35:01.409155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409166 | orchestrator | Tuesday 17 February 2026 03:34:58 +0000 (0:00:00.242) 0:00:39.435 ****** 2026-02-17 03:35:01.409176 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409187 | orchestrator | 2026-02-17 03:35:01.409198 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409209 | orchestrator | Tuesday 17 February 2026 03:34:58 +0000 (0:00:00.235) 0:00:39.670 ****** 2026-02-17 03:35:01.409220 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409233 | orchestrator | 2026-02-17 03:35:01.409251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409268 | orchestrator | Tuesday 17 February 2026 03:34:58 +0000 (0:00:00.202) 0:00:39.872 ****** 2026-02-17 03:35:01.409312 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409330 | orchestrator | 2026-02-17 03:35:01.409348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409364 | orchestrator | Tuesday 17 February 2026 03:34:58 +0000 (0:00:00.223) 0:00:40.096 ****** 2026-02-17 03:35:01.409382 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-17 03:35:01.409399 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-17 03:35:01.409419 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-17 03:35:01.409434 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-17 03:35:01.409446 | orchestrator | 2026-02-17 03:35:01.409465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409476 | orchestrator | Tuesday 17 February 2026 03:34:59 +0000 (0:00:00.963) 
0:00:41.060 ****** 2026-02-17 03:35:01.409487 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409498 | orchestrator | 2026-02-17 03:35:01.409509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409520 | orchestrator | Tuesday 17 February 2026 03:35:00 +0000 (0:00:00.281) 0:00:41.341 ****** 2026-02-17 03:35:01.409531 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409542 | orchestrator | 2026-02-17 03:35:01.409553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409564 | orchestrator | Tuesday 17 February 2026 03:35:00 +0000 (0:00:00.242) 0:00:41.584 ****** 2026-02-17 03:35:01.409574 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409585 | orchestrator | 2026-02-17 03:35:01.409596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:35:01.409607 | orchestrator | Tuesday 17 February 2026 03:35:01 +0000 (0:00:00.758) 0:00:42.342 ****** 2026-02-17 03:35:01.409618 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:35:01.409629 | orchestrator | 2026-02-17 03:35:01.409649 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-17 03:35:05.993323 | orchestrator | Tuesday 17 February 2026 03:35:01 +0000 (0:00:00.280) 0:00:42.622 ****** 2026-02-17 03:35:05.993422 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-17 03:35:05.993432 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-17 03:35:05.993438 | orchestrator | 2026-02-17 03:35:05.993445 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-17 03:35:05.993467 | orchestrator | Tuesday 17 February 2026 03:35:01 +0000 (0:00:00.192) 0:00:42.814 ****** 2026-02-17 03:35:05.993474 | orchestrator | skipping: 
[testbed-node-5]
2026-02-17 03:35:05.993480 | orchestrator |
2026-02-17 03:35:05.993486 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-17 03:35:05.993492 | orchestrator | Tuesday 17 February 2026 03:35:01 +0000 (0:00:00.153) 0:00:42.968 ******
2026-02-17 03:35:05.993498 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.993505 | orchestrator |
2026-02-17 03:35:05.993511 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-17 03:35:05.993516 | orchestrator | Tuesday 17 February 2026 03:35:01 +0000 (0:00:00.168) 0:00:43.136 ******
2026-02-17 03:35:05.993522 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.993528 | orchestrator |
2026-02-17 03:35:05.993534 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-17 03:35:05.993540 | orchestrator | Tuesday 17 February 2026 03:35:02 +0000 (0:00:00.166) 0:00:43.302 ******
2026-02-17 03:35:05.993546 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:35:05.993552 | orchestrator |
2026-02-17 03:35:05.993558 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-17 03:35:05.993564 | orchestrator | Tuesday 17 February 2026 03:35:02 +0000 (0:00:00.202) 0:00:43.505 ******
2026-02-17 03:35:05.993570 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '415e7a1a-a305-5338-824f-e9750ca5ebee'}})
2026-02-17 03:35:05.993577 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67fd3cab-24d5-5329-b459-0f3a5a04c841'}})
2026-02-17 03:35:05.993583 | orchestrator |
2026-02-17 03:35:05.993589 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-17 03:35:05.993595 | orchestrator | Tuesday 17 February 2026 03:35:02 +0000 (0:00:00.183) 0:00:43.688 ******
2026-02-17 03:35:05.993601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '415e7a1a-a305-5338-824f-e9750ca5ebee'}})
2026-02-17 03:35:05.993609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67fd3cab-24d5-5329-b459-0f3a5a04c841'}})
2026-02-17 03:35:05.993615 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.993679 | orchestrator |
2026-02-17 03:35:05.993686 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-17 03:35:05.993692 | orchestrator | Tuesday 17 February 2026 03:35:02 +0000 (0:00:00.156) 0:00:43.845 ******
2026-02-17 03:35:05.993698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '415e7a1a-a305-5338-824f-e9750ca5ebee'}})
2026-02-17 03:35:05.993704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67fd3cab-24d5-5329-b459-0f3a5a04c841'}})
2026-02-17 03:35:05.993710 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.993716 | orchestrator |
2026-02-17 03:35:05.993722 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-17 03:35:05.993728 | orchestrator | Tuesday 17 February 2026 03:35:02 +0000 (0:00:00.186) 0:00:44.031 ******
2026-02-17 03:35:05.993734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '415e7a1a-a305-5338-824f-e9750ca5ebee'}})
2026-02-17 03:35:05.993740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67fd3cab-24d5-5329-b459-0f3a5a04c841'}})
2026-02-17 03:35:05.993746 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.993752 | orchestrator |
2026-02-17 03:35:05.993758 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-17 03:35:05.993764 | orchestrator | Tuesday 17 February 2026 03:35:02 +0000 (0:00:00.166) 0:00:44.198 ******
2026-02-17 03:35:05.993770 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:35:05.993775 | orchestrator |
2026-02-17 03:35:05.993781 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-17 03:35:05.993787 | orchestrator | Tuesday 17 February 2026 03:35:03 +0000 (0:00:00.153) 0:00:44.351 ******
2026-02-17 03:35:05.993793 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:35:05.993799 | orchestrator |
2026-02-17 03:35:05.993805 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-17 03:35:05.993811 | orchestrator | Tuesday 17 February 2026 03:35:03 +0000 (0:00:00.407) 0:00:44.759 ******
2026-02-17 03:35:05.993817 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.993823 | orchestrator |
2026-02-17 03:35:05.993828 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-17 03:35:05.993836 | orchestrator | Tuesday 17 February 2026 03:35:03 +0000 (0:00:00.153) 0:00:44.912 ******
2026-02-17 03:35:05.993843 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.993850 | orchestrator |
2026-02-17 03:35:05.993857 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-17 03:35:05.993863 | orchestrator | Tuesday 17 February 2026 03:35:03 +0000 (0:00:00.159) 0:00:45.072 ******
2026-02-17 03:35:05.993871 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.993878 | orchestrator |
2026-02-17 03:35:05.993884 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-17 03:35:05.993891 | orchestrator | Tuesday 17 February 2026 03:35:04 +0000 (0:00:00.157) 0:00:45.229 ******
2026-02-17 03:35:05.993898 | orchestrator | ok: [testbed-node-5] => {
2026-02-17 03:35:05.993905 | orchestrator |     "ceph_osd_devices": {
2026-02-17 03:35:05.993912 | orchestrator |         "sdb": {
2026-02-17 03:35:05.993934 | orchestrator |             "osd_lvm_uuid": "415e7a1a-a305-5338-824f-e9750ca5ebee"
2026-02-17 03:35:05.993942 | orchestrator |         },
2026-02-17 03:35:05.993949 | orchestrator |         "sdc": {
2026-02-17 03:35:05.993955 | orchestrator |             "osd_lvm_uuid": "67fd3cab-24d5-5329-b459-0f3a5a04c841"
2026-02-17 03:35:05.993962 | orchestrator |         }
2026-02-17 03:35:05.993969 | orchestrator |     }
2026-02-17 03:35:05.993976 | orchestrator | }
2026-02-17 03:35:05.993983 | orchestrator |
2026-02-17 03:35:05.993995 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-17 03:35:05.994002 | orchestrator | Tuesday 17 February 2026 03:35:04 +0000 (0:00:00.169) 0:00:45.399 ******
2026-02-17 03:35:05.994009 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.994063 | orchestrator |
2026-02-17 03:35:05.994070 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-17 03:35:05.994077 | orchestrator | Tuesday 17 February 2026 03:35:04 +0000 (0:00:00.158) 0:00:45.558 ******
2026-02-17 03:35:05.994083 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.994091 | orchestrator |
2026-02-17 03:35:05.994098 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-17 03:35:05.994105 | orchestrator | Tuesday 17 February 2026 03:35:04 +0000 (0:00:00.142) 0:00:45.700 ******
2026-02-17 03:35:05.994112 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:35:05.994118 | orchestrator |
2026-02-17 03:35:05.994124 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-17 03:35:05.994130 | orchestrator | Tuesday 17 February 2026 03:35:04 +0000 (0:00:00.148) 0:00:45.849 ******
2026-02-17 03:35:05.994136 | orchestrator | changed: [testbed-node-5] => {
2026-02-17 03:35:05.994142 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-17 03:35:05.994148 | orchestrator |         "ceph_osd_devices": {
2026-02-17 03:35:05.994154 | orchestrator |             "sdb": {
2026-02-17 03:35:05.994160 | orchestrator |                 "osd_lvm_uuid": "415e7a1a-a305-5338-824f-e9750ca5ebee"
2026-02-17 03:35:05.994174 | orchestrator |             },
2026-02-17 03:35:05.994180 | orchestrator |             "sdc": {
2026-02-17 03:35:05.994186 | orchestrator |                 "osd_lvm_uuid": "67fd3cab-24d5-5329-b459-0f3a5a04c841"
2026-02-17 03:35:05.994192 | orchestrator |             }
2026-02-17 03:35:05.994198 | orchestrator |         },
2026-02-17 03:35:05.994204 | orchestrator |         "lvm_volumes": [
2026-02-17 03:35:05.994210 | orchestrator |             {
2026-02-17 03:35:05.994216 | orchestrator |                 "data": "osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee",
2026-02-17 03:35:05.994222 | orchestrator |                 "data_vg": "ceph-415e7a1a-a305-5338-824f-e9750ca5ebee"
2026-02-17 03:35:05.994227 | orchestrator |             },
2026-02-17 03:35:05.994233 | orchestrator |             {
2026-02-17 03:35:05.994240 | orchestrator |                 "data": "osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841",
2026-02-17 03:35:05.994246 | orchestrator |                 "data_vg": "ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841"
2026-02-17 03:35:05.994251 | orchestrator |             }
2026-02-17 03:35:05.994257 | orchestrator |         ]
2026-02-17 03:35:05.994263 | orchestrator |     }
2026-02-17 03:35:05.994269 | orchestrator | }
2026-02-17 03:35:05.994275 | orchestrator |
2026-02-17 03:35:05.994298 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-17 03:35:05.994304 | orchestrator | Tuesday 17 February 2026 03:35:04 +0000 (0:00:00.238) 0:00:46.088 ******
2026-02-17 03:35:05.994309 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-17 03:35:05.994315 | orchestrator |
2026-02-17 03:35:05.994321 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:35:05.994327 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-17 03:35:05.994334 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-17 03:35:05.994340 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-17 03:35:05.994346 | orchestrator |
2026-02-17 03:35:05.994352 | orchestrator |
2026-02-17 03:35:05.994358 | orchestrator |
2026-02-17 03:35:05.994363 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:35:05.994369 | orchestrator | Tuesday 17 February 2026 03:35:05 +0000 (0:00:01.098) 0:00:47.187 ******
2026-02-17 03:35:05.994375 | orchestrator | ===============================================================================
2026-02-17 03:35:05.994381 | orchestrator | Write configuration file ------------------------------------------------ 4.24s
2026-02-17 03:35:05.994392 | orchestrator | Add known links to the list of available block devices ------------------ 1.65s
2026-02-17 03:35:05.994398 | orchestrator | Add known partitions to the list of available block devices ------------- 1.50s
2026-02-17 03:35:05.994404 | orchestrator | Add known links to the list of available block devices ------------------ 1.01s
2026-02-17 03:35:05.994409 | orchestrator | Add known links to the list of available block devices ------------------ 1.00s
2026-02-17 03:35:05.994415 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2026-02-17 03:35:05.994421 | orchestrator | Print configuration data ------------------------------------------------ 0.92s
2026-02-17 03:35:05.994427 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.88s
2026-02-17 03:35:05.994432 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2026-02-17 03:35:05.994438 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2026-02-17 03:35:05.994444 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s
2026-02-17 03:35:05.994450 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2026-02-17 03:35:05.994455 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2026-02-17 03:35:05.994466 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2026-02-17 03:35:06.454329 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.73s
2026-02-17 03:35:06.454460 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-02-17 03:35:06.454484 | orchestrator | Set OSD devices config data --------------------------------------------- 0.72s
2026-02-17 03:35:06.454579 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-02-17 03:35:06.454602 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.71s
2026-02-17 03:35:06.454613 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-02-17 03:35:29.186334 | orchestrator | 2026-02-17 03:35:29 | INFO  | Task 84f266c3-d854-470a-89f6-07db310f3aa5 (sync inventory) is running in background. Output coming soon.
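The play above derives, for each disk listed in `ceph_osd_devices`, an `lvm_volumes` entry whose LV and VG names embed the per-device `osd_lvm_uuid` (visible in the "Print configuration data" output: `data` becomes `osd-block-<uuid>` and `data_vg` becomes `ceph-<uuid>`). A minimal sketch of that naming convention, using a hypothetical helper rather than the actual OSISM task code:

```python
# Sketch of the lvm_volumes naming convention seen in the log output.
# build_lvm_volumes is a hypothetical helper, not the OSISM implementation.

def build_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    """Map each OSD device's osd_lvm_uuid to its LV/VG names."""
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",       # logical volume name
            "data_vg": f"ceph-{uuid}",         # volume group name
        })
    return volumes

# Input as reported by "Print ceph_osd_devices" for testbed-node-5:
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "415e7a1a-a305-5338-824f-e9750ca5ebee"},
    "sdc": {"osd_lvm_uuid": "67fd3cab-24d5-5329-b459-0f3a5a04c841"},
}
lvm_volumes = build_lvm_volumes(ceph_osd_devices)
print(lvm_volumes)
```

This reproduces the `lvm_volumes` list written to the configuration file by the "Write configuration file" handler.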
2026-02-17 03:35:59.555511 | orchestrator | 2026-02-17 03:35:30 | INFO  | Starting group_vars file reorganization
2026-02-17 03:35:59.555664 | orchestrator | 2026-02-17 03:35:30 | INFO  | Moved 0 file(s) to their respective directories
2026-02-17 03:35:59.555686 | orchestrator | 2026-02-17 03:35:30 | INFO  | Group_vars file reorganization completed
2026-02-17 03:35:59.555699 | orchestrator | 2026-02-17 03:35:33 | INFO  | Starting variable preparation from inventory
2026-02-17 03:35:59.555711 | orchestrator | 2026-02-17 03:35:37 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-17 03:35:59.555723 | orchestrator | 2026-02-17 03:35:37 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-17 03:35:59.555734 | orchestrator | 2026-02-17 03:35:37 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-17 03:35:59.555745 | orchestrator | 2026-02-17 03:35:37 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-17 03:35:59.555756 | orchestrator | 2026-02-17 03:35:37 | INFO  | Variable preparation completed
2026-02-17 03:35:59.555767 | orchestrator | 2026-02-17 03:35:39 | INFO  | Starting inventory overwrite handling
2026-02-17 03:35:59.555778 | orchestrator | 2026-02-17 03:35:39 | INFO  | Handling group overwrites in 99-overwrite
2026-02-17 03:35:59.555789 | orchestrator | 2026-02-17 03:35:39 | INFO  | Removing group frr:children from 60-generic
2026-02-17 03:35:59.555800 | orchestrator | 2026-02-17 03:35:39 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-17 03:35:59.555811 | orchestrator | 2026-02-17 03:35:39 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-17 03:35:59.555853 | orchestrator | 2026-02-17 03:35:39 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-17 03:35:59.555865 | orchestrator | 2026-02-17 03:35:39 | INFO  | Handling group overwrites in 20-roles
2026-02-17 03:35:59.555876 | orchestrator | 2026-02-17 03:35:39 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-17 03:35:59.555887 | orchestrator | 2026-02-17 03:35:39 | INFO  | Removed 5 group(s) in total
2026-02-17 03:35:59.555898 | orchestrator | 2026-02-17 03:35:39 | INFO  | Inventory overwrite handling completed
2026-02-17 03:35:59.555909 | orchestrator | 2026-02-17 03:35:40 | INFO  | Starting merge of inventory files
2026-02-17 03:35:59.555920 | orchestrator | 2026-02-17 03:35:40 | INFO  | Inventory files merged successfully
2026-02-17 03:35:59.555930 | orchestrator | 2026-02-17 03:35:46 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-17 03:35:59.555941 | orchestrator | 2026-02-17 03:35:58 | INFO  | Successfully wrote ClusterShell configuration
2026-02-17 03:35:59.555953 | orchestrator | [master 8d57f3b] 2026-02-17-03-35
2026-02-17 03:35:59.555967 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-17 03:36:02.082982 | orchestrator | 2026-02-17 03:36:02 | INFO  | Task f2917225-58f6-44e4-b93a-1754eed6dd13 (ceph-create-lvm-devices) was prepared for execution.
2026-02-17 03:36:02.083099 | orchestrator | 2026-02-17 03:36:02 | INFO  | It takes a moment until task f2917225-58f6-44e4-b93a-1754eed6dd13 (ceph-create-lvm-devices) has been started and output is visible here.
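The "inventory overwrite handling" messages above suggest a layering rule: a group defined in a higher-priority inventory layer (such as 99-overwrite or 20-roles) shadows the same group in lower layers, so the lower-layer copies are removed before the inventory files are merged. A rough sketch of that rule, using a hypothetical data model rather than the actual osism inventory code:

```python
# Sketch of the overwrite rule implied by the log lines such as
# "Removing group frr:children from 60-generic". The layer names and the
# remove_overwritten_groups helper are illustrative assumptions, not the
# real osism implementation.

def remove_overwritten_groups(layers: dict[str, set[str]], overlay: str) -> list[str]:
    """Drop groups from base layers when the overlay layer redefines them."""
    removed = []
    for group in sorted(layers[overlay]):
        for name in sorted(layers):
            if name != overlay and group in layers[name]:
                layers[name].discard(group)
                removed.append(f"Removing group {group} from {name}")
    return removed

layers = {
    "99-overwrite": {"frr:children", "netbird:children"},
    "60-generic": {"frr:children"},
    "50-infrastructure": {"netbird:children"},
}
removed_lines = remove_overwritten_groups(layers, "99-overwrite")
for line in removed_lines:
    print(line)
```

After the removal pass the remaining per-layer files can be merged without duplicate group definitions, which matches the "Starting merge of inventory files" step that follows in the log.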
2026-02-17 03:36:15.666678 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-17 03:36:15.666806 | orchestrator | 2.16.14
2026-02-17 03:36:15.666823 | orchestrator |
2026-02-17 03:36:15.666833 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-17 03:36:15.666843 | orchestrator |
2026-02-17 03:36:15.666852 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-17 03:36:15.666860 | orchestrator | Tuesday 17 February 2026 03:36:06 +0000 (0:00:00.333) 0:00:00.333 ******
2026-02-17 03:36:15.666869 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 03:36:15.666878 | orchestrator |
2026-02-17 03:36:15.666886 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-17 03:36:15.666894 | orchestrator | Tuesday 17 February 2026 03:36:07 +0000 (0:00:00.273) 0:00:00.607 ******
2026-02-17 03:36:15.666902 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:36:15.666911 | orchestrator |
2026-02-17 03:36:15.666919 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.666927 | orchestrator | Tuesday 17 February 2026 03:36:07 +0000 (0:00:00.258) 0:00:00.866 ******
2026-02-17 03:36:15.666935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-17 03:36:15.666958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-17 03:36:15.666967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-17 03:36:15.666975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-17 03:36:15.666983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-17 03:36:15.666991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-17 03:36:15.666999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-17 03:36:15.667007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-17 03:36:15.667015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-17 03:36:15.667023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-17 03:36:15.667102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-17 03:36:15.667117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-17 03:36:15.667130 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-17 03:36:15.667143 | orchestrator |
2026-02-17 03:36:15.667154 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667165 | orchestrator | Tuesday 17 February 2026 03:36:07 +0000 (0:00:00.587) 0:00:01.453 ******
2026-02-17 03:36:15.667177 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667189 | orchestrator |
2026-02-17 03:36:15.667203 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667217 | orchestrator | Tuesday 17 February 2026 03:36:08 +0000 (0:00:00.234) 0:00:01.687 ******
2026-02-17 03:36:15.667231 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667243 | orchestrator |
2026-02-17 03:36:15.667258 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667270 | orchestrator | Tuesday 17 February 2026 03:36:08 +0000 (0:00:00.235) 0:00:01.923 ******
2026-02-17 03:36:15.667279 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667288 | orchestrator |
2026-02-17 03:36:15.667297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667306 | orchestrator | Tuesday 17 February 2026 03:36:08 +0000 (0:00:00.269) 0:00:02.193 ******
2026-02-17 03:36:15.667337 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667347 | orchestrator |
2026-02-17 03:36:15.667356 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667365 | orchestrator | Tuesday 17 February 2026 03:36:08 +0000 (0:00:00.224) 0:00:02.417 ******
2026-02-17 03:36:15.667374 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667383 | orchestrator |
2026-02-17 03:36:15.667392 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667402 | orchestrator | Tuesday 17 February 2026 03:36:09 +0000 (0:00:00.229) 0:00:02.647 ******
2026-02-17 03:36:15.667411 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667420 | orchestrator |
2026-02-17 03:36:15.667429 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667438 | orchestrator | Tuesday 17 February 2026 03:36:09 +0000 (0:00:00.240) 0:00:02.888 ******
2026-02-17 03:36:15.667447 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667455 | orchestrator |
2026-02-17 03:36:15.667465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667474 | orchestrator | Tuesday 17 February 2026 03:36:09 +0000 (0:00:00.219) 0:00:03.107 ******
2026-02-17 03:36:15.667482 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667491 | orchestrator |
2026-02-17 03:36:15.667500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667509 | orchestrator | Tuesday 17 February 2026 03:36:09 +0000 (0:00:00.247) 0:00:03.355 ******
2026-02-17 03:36:15.667518 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25)
2026-02-17 03:36:15.667529 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25)
2026-02-17 03:36:15.667538 | orchestrator |
2026-02-17 03:36:15.667547 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667573 | orchestrator | Tuesday 17 February 2026 03:36:10 +0000 (0:00:00.723) 0:00:04.078 ******
2026-02-17 03:36:15.667582 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427)
2026-02-17 03:36:15.667592 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427)
2026-02-17 03:36:15.667600 | orchestrator |
2026-02-17 03:36:15.667608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667625 | orchestrator | Tuesday 17 February 2026 03:36:11 +0000 (0:00:00.738) 0:00:04.816 ******
2026-02-17 03:36:15.667633 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350)
2026-02-17 03:36:15.667642 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350)
2026-02-17 03:36:15.667649 | orchestrator |
2026-02-17 03:36:15.667657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667665 | orchestrator | Tuesday 17 February 2026 03:36:12 +0000 (0:00:01.007) 0:00:05.824 ******
2026-02-17 03:36:15.667673 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3)
2026-02-17 03:36:15.667687 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3)
2026-02-17 03:36:15.667698 | orchestrator |
2026-02-17 03:36:15.667711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:36:15.667725 | orchestrator | Tuesday 17 February 2026 03:36:12 +0000 (0:00:00.523) 0:00:06.348 ******
2026-02-17 03:36:15.667737 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-17 03:36:15.667752 | orchestrator |
2026-02-17 03:36:15.667766 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:15.667778 | orchestrator | Tuesday 17 February 2026 03:36:13 +0000 (0:00:00.407) 0:00:06.756 ******
2026-02-17 03:36:15.667792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-17 03:36:15.667802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-17 03:36:15.667810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-17 03:36:15.667817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-17 03:36:15.667825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-17 03:36:15.667833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-17 03:36:15.667841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-17 03:36:15.667849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-17 03:36:15.667857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-17 03:36:15.667865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-17 03:36:15.667873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-17 03:36:15.667881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-17 03:36:15.667889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-17 03:36:15.667897 | orchestrator |
2026-02-17 03:36:15.667905 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:15.667913 | orchestrator | Tuesday 17 February 2026 03:36:13 +0000 (0:00:00.445) 0:00:07.201 ******
2026-02-17 03:36:15.667921 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667929 | orchestrator |
2026-02-17 03:36:15.667937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:15.667945 | orchestrator | Tuesday 17 February 2026 03:36:13 +0000 (0:00:00.219) 0:00:07.421 ******
2026-02-17 03:36:15.667953 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667961 | orchestrator |
2026-02-17 03:36:15.667969 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:15.667977 | orchestrator | Tuesday 17 February 2026 03:36:14 +0000 (0:00:00.243) 0:00:07.664 ******
2026-02-17 03:36:15.667985 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.667999 | orchestrator |
2026-02-17 03:36:15.668007 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:15.668015 | orchestrator | Tuesday 17 February 2026 03:36:14 +0000 (0:00:00.263) 0:00:07.928 ******
2026-02-17 03:36:15.668022 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.668031 | orchestrator |
2026-02-17 03:36:15.668039 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:15.668047 | orchestrator | Tuesday 17 February 2026 03:36:14 +0000 (0:00:00.222) 0:00:08.150 ******
2026-02-17 03:36:15.668055 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.668063 | orchestrator |
2026-02-17 03:36:15.668071 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:15.668079 | orchestrator | Tuesday 17 February 2026 03:36:14 +0000 (0:00:00.209) 0:00:08.360 ******
2026-02-17 03:36:15.668087 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.668095 | orchestrator |
2026-02-17 03:36:15.668103 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:15.668111 | orchestrator | Tuesday 17 February 2026 03:36:15 +0000 (0:00:00.655) 0:00:09.016 ******
2026-02-17 03:36:15.668119 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:15.668127 | orchestrator |
2026-02-17 03:36:15.668140 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:24.139967 | orchestrator | Tuesday 17 February 2026 03:36:15 +0000 (0:00:00.219) 0:00:09.236 ******
2026-02-17 03:36:24.140079 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140099 | orchestrator |
2026-02-17 03:36:24.140112 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:24.140124 | orchestrator | Tuesday 17 February 2026 03:36:15 +0000 (0:00:00.235) 0:00:09.471 ******
2026-02-17 03:36:24.140136 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-17 03:36:24.140148 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-17 03:36:24.140160 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-17 03:36:24.140172 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-17 03:36:24.140183 | orchestrator |
2026-02-17 03:36:24.140195 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:24.140206 | orchestrator | Tuesday 17 February 2026 03:36:16 +0000 (0:00:00.777) 0:00:10.249 ******
2026-02-17 03:36:24.140217 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140228 | orchestrator |
2026-02-17 03:36:24.140239 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:24.140250 | orchestrator | Tuesday 17 February 2026 03:36:16 +0000 (0:00:00.232) 0:00:10.481 ******
2026-02-17 03:36:24.140261 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140272 | orchestrator |
2026-02-17 03:36:24.140354 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:24.140369 | orchestrator | Tuesday 17 February 2026 03:36:17 +0000 (0:00:00.239) 0:00:10.721 ******
2026-02-17 03:36:24.140380 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140392 | orchestrator |
2026-02-17 03:36:24.140402 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:36:24.140413 | orchestrator | Tuesday 17 February 2026 03:36:17 +0000 (0:00:00.268) 0:00:10.989 ******
2026-02-17 03:36:24.140424 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140435 | orchestrator |
2026-02-17 03:36:24.140446 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-17 03:36:24.140457 | orchestrator | Tuesday 17 February 2026 03:36:17 +0000 (0:00:00.242) 0:00:11.232 ******
2026-02-17 03:36:24.140468 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140478 | orchestrator |
2026-02-17 03:36:24.140489 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-17 03:36:24.140502 | orchestrator | Tuesday 17 February 2026 03:36:17 +0000 (0:00:00.144) 0:00:11.377 ******
2026-02-17 03:36:24.140516 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '366ad200-d272-50e2-9bbd-3174591b235f'}})
2026-02-17 03:36:24.140550 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}})
2026-02-17 03:36:24.140563 | orchestrator |
2026-02-17 03:36:24.140576 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-17 03:36:24.140590 | orchestrator | Tuesday 17 February 2026 03:36:17 +0000 (0:00:00.199) 0:00:11.576 ******
2026-02-17 03:36:24.140604 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})
2026-02-17 03:36:24.140617 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})
2026-02-17 03:36:24.140630 | orchestrator |
2026-02-17 03:36:24.140642 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-17 03:36:24.140655 | orchestrator | Tuesday 17 February 2026 03:36:20 +0000 (0:00:02.035) 0:00:13.612 ******
2026-02-17 03:36:24.140668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})
2026-02-17 03:36:24.140681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})
2026-02-17 03:36:24.140694 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140707 | orchestrator |
2026-02-17 03:36:24.140720 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-17 03:36:24.140732 | orchestrator | Tuesday 17 February 2026 03:36:20 +0000 (0:00:00.398) 0:00:14.011 ******
2026-02-17 03:36:24.140745 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})
2026-02-17 03:36:24.140757 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})
2026-02-17 03:36:24.140769 | orchestrator |
2026-02-17 03:36:24.140780 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-17 03:36:24.140790 | orchestrator | Tuesday 17 February 2026 03:36:21 +0000 (0:00:01.488) 0:00:15.499 ******
2026-02-17 03:36:24.140801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})
2026-02-17 03:36:24.140812 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})
2026-02-17 03:36:24.140823 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140833 | orchestrator |
2026-02-17 03:36:24.140844 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-17 03:36:24.140855 | orchestrator | Tuesday 17 February 2026 03:36:22 +0000 (0:00:00.188) 0:00:15.688 ******
2026-02-17 03:36:24.140883 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140894 | orchestrator |
2026-02-17 03:36:24.140905 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-17 03:36:24.140916 | orchestrator | Tuesday 17 February 2026 03:36:22 +0000 (0:00:00.158) 0:00:15.847 ******
2026-02-17 03:36:24.140927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})
2026-02-17 03:36:24.140939 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})
2026-02-17 03:36:24.140950 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.140961 | orchestrator |
2026-02-17 03:36:24.140971 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-17 03:36:24.140982 | orchestrator | Tuesday 17 February 2026 03:36:22 +0000 (0:00:00.160) 0:00:16.008 ******
2026-02-17 03:36:24.141003 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.141014 | orchestrator |
2026-02-17 03:36:24.141025 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-17 03:36:24.141036 | orchestrator | Tuesday 17 February 2026 03:36:22 +0000 (0:00:00.154) 0:00:16.162 ******
2026-02-17 03:36:24.141052 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})
2026-02-17 03:36:24.141064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})
2026-02-17 03:36:24.141075 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.141086 | orchestrator |
2026-02-17 03:36:24.141097 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-17 03:36:24.141108 | orchestrator | Tuesday 17 February 2026 03:36:22 +0000 (0:00:00.167) 0:00:16.329 ******
2026-02-17 03:36:24.141119 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.141129 | orchestrator |
2026-02-17 03:36:24.141140 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-17 03:36:24.141151 | orchestrator | Tuesday 17 February 2026 03:36:22 +0000 (0:00:00.169) 0:00:16.499 ******
2026-02-17 03:36:24.141163 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})
2026-02-17 03:36:24.141174 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})
2026-02-17 03:36:24.141185 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.141195 | orchestrator |
2026-02-17 03:36:24.141206 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-17 03:36:24.141217 | orchestrator | Tuesday 17 February 2026 03:36:23 +0000 (0:00:00.177) 0:00:16.677 ******
2026-02-17 03:36:24.141228 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:36:24.141240 | orchestrator |
2026-02-17 03:36:24.141250 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-17 03:36:24.141261 | orchestrator | Tuesday 17 February 2026 03:36:23 +0000 (0:00:00.145) 0:00:16.823 ******
2026-02-17 03:36:24.141272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})
2026-02-17 03:36:24.141283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})
2026-02-17 03:36:24.141294 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:36:24.141305 | orchestrator |
2026-02-17 03:36:24.141342 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-17 03:36:24.141355 | orchestrator | Tuesday 17 February 2026 03:36:23 +0000 (0:00:00.159) 0:00:16.983 ******
2026-02-17 03:36:24.141366 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:24.141377 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:24.141388 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:24.141399 | orchestrator | 2026-02-17 03:36:24.141410 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-17 03:36:24.141421 | orchestrator | Tuesday 17 February 2026 03:36:23 +0000 (0:00:00.403) 0:00:17.387 ****** 2026-02-17 03:36:24.141432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:24.141443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:24.141461 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:24.141472 | orchestrator | 2026-02-17 03:36:24.141483 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-17 03:36:24.141494 | orchestrator | Tuesday 17 February 2026 03:36:23 +0000 (0:00:00.174) 0:00:17.562 ****** 2026-02-17 03:36:24.141505 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:24.141516 | orchestrator | 2026-02-17 03:36:24.141527 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-17 03:36:24.141545 | orchestrator | Tuesday 17 February 2026 03:36:24 +0000 (0:00:00.153) 0:00:17.715 ****** 2026-02-17 03:36:31.186236 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186348 | orchestrator | 2026-02-17 03:36:31.186360 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-17 03:36:31.186369 | orchestrator | Tuesday 17 February 2026 03:36:24 +0000 (0:00:00.158) 0:00:17.873 ****** 2026-02-17 03:36:31.186376 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186384 | orchestrator | 2026-02-17 03:36:31.186391 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-17 03:36:31.186398 | orchestrator | Tuesday 17 February 2026 03:36:24 +0000 (0:00:00.165) 0:00:18.039 ****** 2026-02-17 03:36:31.186406 | orchestrator | ok: [testbed-node-3] => { 2026-02-17 03:36:31.186413 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-17 03:36:31.186421 | orchestrator | } 2026-02-17 03:36:31.186428 | orchestrator | 2026-02-17 03:36:31.186435 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-17 03:36:31.186441 | orchestrator | Tuesday 17 February 2026 03:36:24 +0000 (0:00:00.156) 0:00:18.195 ****** 2026-02-17 03:36:31.186448 | orchestrator | ok: [testbed-node-3] => { 2026-02-17 03:36:31.186455 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-17 03:36:31.186462 | orchestrator | } 2026-02-17 03:36:31.186469 | orchestrator | 2026-02-17 03:36:31.186475 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-17 03:36:31.186494 | orchestrator | Tuesday 17 February 2026 03:36:24 +0000 (0:00:00.196) 0:00:18.391 ****** 2026-02-17 03:36:31.186501 | orchestrator | ok: [testbed-node-3] => { 2026-02-17 03:36:31.186508 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-17 03:36:31.186515 | orchestrator | } 2026-02-17 03:36:31.186522 | orchestrator | 2026-02-17 03:36:31.186529 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-17 03:36:31.186536 | orchestrator | Tuesday 17 February 2026 03:36:24 +0000 (0:00:00.160) 0:00:18.552 ****** 2026-02-17 03:36:31.186542 | orchestrator | ok: 
[testbed-node-3] 2026-02-17 03:36:31.186549 | orchestrator | 2026-02-17 03:36:31.186556 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-17 03:36:31.186563 | orchestrator | Tuesday 17 February 2026 03:36:25 +0000 (0:00:00.689) 0:00:19.242 ****** 2026-02-17 03:36:31.186569 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:36:31.186576 | orchestrator | 2026-02-17 03:36:31.186583 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-17 03:36:31.186590 | orchestrator | Tuesday 17 February 2026 03:36:26 +0000 (0:00:00.544) 0:00:19.786 ****** 2026-02-17 03:36:31.186596 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:36:31.186603 | orchestrator | 2026-02-17 03:36:31.186610 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-17 03:36:31.186617 | orchestrator | Tuesday 17 February 2026 03:36:26 +0000 (0:00:00.515) 0:00:20.301 ****** 2026-02-17 03:36:31.186624 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:36:31.186630 | orchestrator | 2026-02-17 03:36:31.186637 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-17 03:36:31.186644 | orchestrator | Tuesday 17 February 2026 03:36:27 +0000 (0:00:00.397) 0:00:20.699 ****** 2026-02-17 03:36:31.186651 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186658 | orchestrator | 2026-02-17 03:36:31.186664 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-17 03:36:31.186686 | orchestrator | Tuesday 17 February 2026 03:36:27 +0000 (0:00:00.109) 0:00:20.809 ****** 2026-02-17 03:36:31.186693 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186700 | orchestrator | 2026-02-17 03:36:31.186707 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-17 03:36:31.186713 | orchestrator | 
Tuesday 17 February 2026 03:36:27 +0000 (0:00:00.138) 0:00:20.948 ****** 2026-02-17 03:36:31.186720 | orchestrator | ok: [testbed-node-3] => { 2026-02-17 03:36:31.186727 | orchestrator |  "vgs_report": { 2026-02-17 03:36:31.186734 | orchestrator |  "vg": [] 2026-02-17 03:36:31.186741 | orchestrator |  } 2026-02-17 03:36:31.186748 | orchestrator | } 2026-02-17 03:36:31.186755 | orchestrator | 2026-02-17 03:36:31.186761 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-17 03:36:31.186769 | orchestrator | Tuesday 17 February 2026 03:36:27 +0000 (0:00:00.162) 0:00:21.111 ****** 2026-02-17 03:36:31.186778 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186786 | orchestrator | 2026-02-17 03:36:31.186793 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-17 03:36:31.186801 | orchestrator | Tuesday 17 February 2026 03:36:27 +0000 (0:00:00.156) 0:00:21.268 ****** 2026-02-17 03:36:31.186809 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186816 | orchestrator | 2026-02-17 03:36:31.186824 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-17 03:36:31.186832 | orchestrator | Tuesday 17 February 2026 03:36:27 +0000 (0:00:00.152) 0:00:21.420 ****** 2026-02-17 03:36:31.186839 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186847 | orchestrator | 2026-02-17 03:36:31.186855 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-17 03:36:31.186863 | orchestrator | Tuesday 17 February 2026 03:36:27 +0000 (0:00:00.162) 0:00:21.583 ****** 2026-02-17 03:36:31.186870 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186878 | orchestrator | 2026-02-17 03:36:31.186885 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-17 03:36:31.186893 | orchestrator | Tuesday 
17 February 2026 03:36:28 +0000 (0:00:00.164) 0:00:21.747 ****** 2026-02-17 03:36:31.186901 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186908 | orchestrator | 2026-02-17 03:36:31.186916 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-17 03:36:31.186924 | orchestrator | Tuesday 17 February 2026 03:36:28 +0000 (0:00:00.146) 0:00:21.894 ****** 2026-02-17 03:36:31.186931 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186939 | orchestrator | 2026-02-17 03:36:31.186946 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-17 03:36:31.186954 | orchestrator | Tuesday 17 February 2026 03:36:28 +0000 (0:00:00.155) 0:00:22.050 ****** 2026-02-17 03:36:31.186961 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.186969 | orchestrator | 2026-02-17 03:36:31.186976 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-17 03:36:31.186984 | orchestrator | Tuesday 17 February 2026 03:36:28 +0000 (0:00:00.144) 0:00:22.195 ****** 2026-02-17 03:36:31.187005 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187013 | orchestrator | 2026-02-17 03:36:31.187021 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-17 03:36:31.187028 | orchestrator | Tuesday 17 February 2026 03:36:28 +0000 (0:00:00.369) 0:00:22.564 ****** 2026-02-17 03:36:31.187036 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187044 | orchestrator | 2026-02-17 03:36:31.187052 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-17 03:36:31.187060 | orchestrator | Tuesday 17 February 2026 03:36:29 +0000 (0:00:00.158) 0:00:22.723 ****** 2026-02-17 03:36:31.187068 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187076 | orchestrator | 2026-02-17 03:36:31.187083 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-17 03:36:31.187091 | orchestrator | Tuesday 17 February 2026 03:36:29 +0000 (0:00:00.152) 0:00:22.875 ****** 2026-02-17 03:36:31.187104 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187111 | orchestrator | 2026-02-17 03:36:31.187119 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-17 03:36:31.187127 | orchestrator | Tuesday 17 February 2026 03:36:29 +0000 (0:00:00.169) 0:00:23.045 ****** 2026-02-17 03:36:31.187134 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187141 | orchestrator | 2026-02-17 03:36:31.187152 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-17 03:36:31.187159 | orchestrator | Tuesday 17 February 2026 03:36:29 +0000 (0:00:00.153) 0:00:23.199 ****** 2026-02-17 03:36:31.187166 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187172 | orchestrator | 2026-02-17 03:36:31.187179 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-17 03:36:31.187186 | orchestrator | Tuesday 17 February 2026 03:36:29 +0000 (0:00:00.141) 0:00:23.341 ****** 2026-02-17 03:36:31.187193 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187199 | orchestrator | 2026-02-17 03:36:31.187206 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-17 03:36:31.187213 | orchestrator | Tuesday 17 February 2026 03:36:29 +0000 (0:00:00.147) 0:00:23.489 ****** 2026-02-17 03:36:31.187221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:31.187229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 
'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:31.187236 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187243 | orchestrator | 2026-02-17 03:36:31.187250 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-17 03:36:31.187256 | orchestrator | Tuesday 17 February 2026 03:36:30 +0000 (0:00:00.184) 0:00:23.673 ****** 2026-02-17 03:36:31.187263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:31.187270 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:31.187277 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187284 | orchestrator | 2026-02-17 03:36:31.187290 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-17 03:36:31.187297 | orchestrator | Tuesday 17 February 2026 03:36:30 +0000 (0:00:00.189) 0:00:23.863 ****** 2026-02-17 03:36:31.187304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:31.187311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:31.187352 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187359 | orchestrator | 2026-02-17 03:36:31.187366 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-17 03:36:31.187373 | orchestrator | Tuesday 17 February 2026 03:36:30 +0000 (0:00:00.156) 0:00:24.019 ****** 2026-02-17 03:36:31.187380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:31.187386 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:31.187393 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187400 | orchestrator | 2026-02-17 03:36:31.187407 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-17 03:36:31.187413 | orchestrator | Tuesday 17 February 2026 03:36:30 +0000 (0:00:00.168) 0:00:24.187 ****** 2026-02-17 03:36:31.187425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:31.187432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:31.187439 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:31.187446 | orchestrator | 2026-02-17 03:36:31.187452 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-17 03:36:31.187459 | orchestrator | Tuesday 17 February 2026 03:36:31 +0000 (0:00:00.415) 0:00:24.602 ****** 2026-02-17 03:36:31.187471 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:36.923558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:36.923673 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:36.923689 | orchestrator | 2026-02-17 03:36:36.923702 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-17 03:36:36.923716 | orchestrator | Tuesday 17 February 2026 03:36:31 +0000 (0:00:00.163) 0:00:24.765 ****** 2026-02-17 03:36:36.923727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:36.923739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:36.923750 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:36.923761 | orchestrator | 2026-02-17 03:36:36.923791 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-17 03:36:36.923803 | orchestrator | Tuesday 17 February 2026 03:36:31 +0000 (0:00:00.183) 0:00:24.949 ****** 2026-02-17 03:36:36.923814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:36.923825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:36.923836 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:36.923847 | orchestrator | 2026-02-17 03:36:36.923860 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-17 03:36:36.923879 | orchestrator | Tuesday 17 February 2026 03:36:31 +0000 (0:00:00.178) 0:00:25.127 ****** 2026-02-17 03:36:36.923897 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:36:36.923917 | orchestrator | 2026-02-17 03:36:36.923935 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-17 03:36:36.923954 | orchestrator | Tuesday 17 February 2026 03:36:32 +0000 
(0:00:00.559) 0:00:25.686 ****** 2026-02-17 03:36:36.923974 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:36:36.923994 | orchestrator | 2026-02-17 03:36:36.924015 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-17 03:36:36.924036 | orchestrator | Tuesday 17 February 2026 03:36:32 +0000 (0:00:00.552) 0:00:26.239 ****** 2026-02-17 03:36:36.924057 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:36:36.924077 | orchestrator | 2026-02-17 03:36:36.924096 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-17 03:36:36.924118 | orchestrator | Tuesday 17 February 2026 03:36:32 +0000 (0:00:00.155) 0:00:26.394 ****** 2026-02-17 03:36:36.924137 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'vg_name': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'}) 2026-02-17 03:36:36.924159 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'vg_name': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}) 2026-02-17 03:36:36.924205 | orchestrator | 2026-02-17 03:36:36.924226 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-17 03:36:36.924246 | orchestrator | Tuesday 17 February 2026 03:36:33 +0000 (0:00:00.203) 0:00:26.598 ****** 2026-02-17 03:36:36.924266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:36.924288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:36.924308 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:36.924508 | orchestrator | 2026-02-17 03:36:36.924541 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-17 03:36:36.924552 | orchestrator | Tuesday 17 February 2026 03:36:33 +0000 (0:00:00.182) 0:00:26.780 ****** 2026-02-17 03:36:36.924563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:36.924575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:36.924585 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:36.924596 | orchestrator | 2026-02-17 03:36:36.924607 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-17 03:36:36.924618 | orchestrator | Tuesday 17 February 2026 03:36:33 +0000 (0:00:00.171) 0:00:26.953 ****** 2026-02-17 03:36:36.924629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 03:36:36.924640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 03:36:36.924650 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:36:36.924661 | orchestrator | 2026-02-17 03:36:36.924672 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-17 03:36:36.924683 | orchestrator | Tuesday 17 February 2026 03:36:33 +0000 (0:00:00.166) 0:00:27.119 ****** 2026-02-17 03:36:36.924719 | orchestrator | ok: [testbed-node-3] => { 2026-02-17 03:36:36.924732 | orchestrator |  "lvm_report": { 2026-02-17 03:36:36.924743 | orchestrator |  "lv": [ 2026-02-17 03:36:36.924753 | orchestrator |  { 2026-02-17 03:36:36.924765 | orchestrator |  "lv_name": 
"osd-block-366ad200-d272-50e2-9bbd-3174591b235f", 2026-02-17 03:36:36.924776 | orchestrator |  "vg_name": "ceph-366ad200-d272-50e2-9bbd-3174591b235f" 2026-02-17 03:36:36.924787 | orchestrator |  }, 2026-02-17 03:36:36.924798 | orchestrator |  { 2026-02-17 03:36:36.924809 | orchestrator |  "lv_name": "osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3", 2026-02-17 03:36:36.924820 | orchestrator |  "vg_name": "ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3" 2026-02-17 03:36:36.924831 | orchestrator |  } 2026-02-17 03:36:36.924842 | orchestrator |  ], 2026-02-17 03:36:36.924852 | orchestrator |  "pv": [ 2026-02-17 03:36:36.924863 | orchestrator |  { 2026-02-17 03:36:36.924874 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-17 03:36:36.924885 | orchestrator |  "vg_name": "ceph-366ad200-d272-50e2-9bbd-3174591b235f" 2026-02-17 03:36:36.924896 | orchestrator |  }, 2026-02-17 03:36:36.924906 | orchestrator |  { 2026-02-17 03:36:36.924927 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-17 03:36:36.924938 | orchestrator |  "vg_name": "ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3" 2026-02-17 03:36:36.924949 | orchestrator |  } 2026-02-17 03:36:36.924960 | orchestrator |  ] 2026-02-17 03:36:36.924971 | orchestrator |  } 2026-02-17 03:36:36.924988 | orchestrator | } 2026-02-17 03:36:36.925023 | orchestrator | 2026-02-17 03:36:36.925042 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-17 03:36:36.925061 | orchestrator | 2026-02-17 03:36:36.925079 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-17 03:36:36.925094 | orchestrator | Tuesday 17 February 2026 03:36:34 +0000 (0:00:00.546) 0:00:27.666 ****** 2026-02-17 03:36:36.925105 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-17 03:36:36.925116 | orchestrator | 2026-02-17 03:36:36.925127 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-17 
03:36:36.925138 | orchestrator | Tuesday 17 February 2026 03:36:34 +0000 (0:00:00.279) 0:00:27.945 ****** 2026-02-17 03:36:36.925149 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:36:36.925160 | orchestrator | 2026-02-17 03:36:36.925170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:36.925181 | orchestrator | Tuesday 17 February 2026 03:36:34 +0000 (0:00:00.267) 0:00:28.213 ****** 2026-02-17 03:36:36.925192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-17 03:36:36.925203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-17 03:36:36.925214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-17 03:36:36.925224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-17 03:36:36.925235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-17 03:36:36.925246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-17 03:36:36.925257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-17 03:36:36.925268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-17 03:36:36.925278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-17 03:36:36.925289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-17 03:36:36.925299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-17 03:36:36.925310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-17 03:36:36.925373 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-17 03:36:36.925387 | orchestrator | 2026-02-17 03:36:36.925398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:36.925409 | orchestrator | Tuesday 17 February 2026 03:36:35 +0000 (0:00:00.462) 0:00:28.675 ****** 2026-02-17 03:36:36.925420 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:36.925431 | orchestrator | 2026-02-17 03:36:36.925442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:36.925453 | orchestrator | Tuesday 17 February 2026 03:36:35 +0000 (0:00:00.219) 0:00:28.895 ****** 2026-02-17 03:36:36.925464 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:36.925474 | orchestrator | 2026-02-17 03:36:36.925485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:36.925496 | orchestrator | Tuesday 17 February 2026 03:36:35 +0000 (0:00:00.209) 0:00:29.104 ****** 2026-02-17 03:36:36.925507 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:36.925518 | orchestrator | 2026-02-17 03:36:36.925529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:36.925540 | orchestrator | Tuesday 17 February 2026 03:36:35 +0000 (0:00:00.241) 0:00:29.346 ****** 2026-02-17 03:36:36.925551 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:36.925561 | orchestrator | 2026-02-17 03:36:36.925572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:36.925583 | orchestrator | Tuesday 17 February 2026 03:36:35 +0000 (0:00:00.232) 0:00:29.579 ****** 2026-02-17 03:36:36.925602 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:36.925613 | orchestrator | 2026-02-17 03:36:36.925628 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-17 03:36:36.925647 | orchestrator | Tuesday 17 February 2026 03:36:36 +0000 (0:00:00.220) 0:00:29.799 ****** 2026-02-17 03:36:36.925666 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:36.925683 | orchestrator | 2026-02-17 03:36:36.925709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:47.937125 | orchestrator | Tuesday 17 February 2026 03:36:36 +0000 (0:00:00.698) 0:00:30.497 ****** 2026-02-17 03:36:47.937231 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.937245 | orchestrator | 2026-02-17 03:36:47.937256 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:47.937265 | orchestrator | Tuesday 17 February 2026 03:36:37 +0000 (0:00:00.217) 0:00:30.715 ****** 2026-02-17 03:36:47.937274 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.937283 | orchestrator | 2026-02-17 03:36:47.937292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:47.937301 | orchestrator | Tuesday 17 February 2026 03:36:37 +0000 (0:00:00.218) 0:00:30.933 ****** 2026-02-17 03:36:47.937311 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15) 2026-02-17 03:36:47.937321 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15) 2026-02-17 03:36:47.937375 | orchestrator | 2026-02-17 03:36:47.937398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:47.937407 | orchestrator | Tuesday 17 February 2026 03:36:37 +0000 (0:00:00.470) 0:00:31.403 ****** 2026-02-17 03:36:47.937416 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856) 2026-02-17 03:36:47.937425 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856) 2026-02-17 03:36:47.937434 | orchestrator | 2026-02-17 03:36:47.937443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:47.937451 | orchestrator | Tuesday 17 February 2026 03:36:38 +0000 (0:00:00.474) 0:00:31.878 ****** 2026-02-17 03:36:47.937460 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67) 2026-02-17 03:36:47.937469 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67) 2026-02-17 03:36:47.937478 | orchestrator | 2026-02-17 03:36:47.937487 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:47.937496 | orchestrator | Tuesday 17 February 2026 03:36:38 +0000 (0:00:00.490) 0:00:32.368 ****** 2026-02-17 03:36:47.937504 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416) 2026-02-17 03:36:47.937513 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416) 2026-02-17 03:36:47.937522 | orchestrator | 2026-02-17 03:36:47.937531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-17 03:36:47.937540 | orchestrator | Tuesday 17 February 2026 03:36:39 +0000 (0:00:00.460) 0:00:32.829 ****** 2026-02-17 03:36:47.937548 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-17 03:36:47.937557 | orchestrator | 2026-02-17 03:36:47.937566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.937575 | orchestrator | Tuesday 17 February 2026 03:36:39 +0000 (0:00:00.376) 0:00:33.205 ****** 2026-02-17 03:36:47.937583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-17 03:36:47.937593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-17 03:36:47.937602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-17 03:36:47.937635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-17 03:36:47.937645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-17 03:36:47.937656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-17 03:36:47.937665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-17 03:36:47.937675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-17 03:36:47.937685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-17 03:36:47.937694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-17 03:36:47.937704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-17 03:36:47.937713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-17 03:36:47.937723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-17 03:36:47.937732 | orchestrator | 2026-02-17 03:36:47.937742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.937752 | orchestrator | Tuesday 17 February 2026 03:36:40 +0000 (0:00:00.521) 0:00:33.726 ****** 2026-02-17 03:36:47.937762 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.937771 | orchestrator | 2026-02-17 
03:36:47.937781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.937791 | orchestrator | Tuesday 17 February 2026 03:36:40 +0000 (0:00:00.230) 0:00:33.957 ****** 2026-02-17 03:36:47.937801 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.937811 | orchestrator | 2026-02-17 03:36:47.937821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.937836 | orchestrator | Tuesday 17 February 2026 03:36:40 +0000 (0:00:00.213) 0:00:34.171 ****** 2026-02-17 03:36:47.937859 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.937877 | orchestrator | 2026-02-17 03:36:47.937912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.937927 | orchestrator | Tuesday 17 February 2026 03:36:41 +0000 (0:00:00.707) 0:00:34.879 ****** 2026-02-17 03:36:47.937941 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.937956 | orchestrator | 2026-02-17 03:36:47.937970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.937984 | orchestrator | Tuesday 17 February 2026 03:36:41 +0000 (0:00:00.226) 0:00:35.106 ****** 2026-02-17 03:36:47.938000 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938057 | orchestrator | 2026-02-17 03:36:47.938076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.938092 | orchestrator | Tuesday 17 February 2026 03:36:41 +0000 (0:00:00.235) 0:00:35.341 ****** 2026-02-17 03:36:47.938106 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938120 | orchestrator | 2026-02-17 03:36:47.938135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.938150 | orchestrator | Tuesday 17 February 2026 03:36:41 +0000 (0:00:00.223) 
0:00:35.564 ****** 2026-02-17 03:36:47.938173 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938188 | orchestrator | 2026-02-17 03:36:47.938203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.938217 | orchestrator | Tuesday 17 February 2026 03:36:42 +0000 (0:00:00.247) 0:00:35.812 ****** 2026-02-17 03:36:47.938231 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938245 | orchestrator | 2026-02-17 03:36:47.938260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.938274 | orchestrator | Tuesday 17 February 2026 03:36:42 +0000 (0:00:00.224) 0:00:36.037 ****** 2026-02-17 03:36:47.938289 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-17 03:36:47.938319 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-17 03:36:47.938362 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-17 03:36:47.938377 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-17 03:36:47.938392 | orchestrator | 2026-02-17 03:36:47.938406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.938421 | orchestrator | Tuesday 17 February 2026 03:36:43 +0000 (0:00:00.683) 0:00:36.721 ****** 2026-02-17 03:36:47.938436 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938450 | orchestrator | 2026-02-17 03:36:47.938465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.938474 | orchestrator | Tuesday 17 February 2026 03:36:43 +0000 (0:00:00.218) 0:00:36.940 ****** 2026-02-17 03:36:47.938482 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938491 | orchestrator | 2026-02-17 03:36:47.938499 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.938508 | orchestrator | Tuesday 17 
February 2026 03:36:43 +0000 (0:00:00.213) 0:00:37.153 ****** 2026-02-17 03:36:47.938516 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938525 | orchestrator | 2026-02-17 03:36:47.938534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-17 03:36:47.938542 | orchestrator | Tuesday 17 February 2026 03:36:43 +0000 (0:00:00.216) 0:00:37.369 ****** 2026-02-17 03:36:47.938551 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938559 | orchestrator | 2026-02-17 03:36:47.938568 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-17 03:36:47.938577 | orchestrator | Tuesday 17 February 2026 03:36:44 +0000 (0:00:00.224) 0:00:37.594 ****** 2026-02-17 03:36:47.938585 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938607 | orchestrator | 2026-02-17 03:36:47.938616 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-17 03:36:47.938625 | orchestrator | Tuesday 17 February 2026 03:36:44 +0000 (0:00:00.393) 0:00:37.987 ****** 2026-02-17 03:36:47.938643 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}}) 2026-02-17 03:36:47.938653 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8aff4da6-f81a-563d-a807-caa30e1cb6b0'}}) 2026-02-17 03:36:47.938662 | orchestrator | 2026-02-17 03:36:47.938670 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-17 03:36:47.938679 | orchestrator | Tuesday 17 February 2026 03:36:44 +0000 (0:00:00.200) 0:00:38.187 ****** 2026-02-17 03:36:47.938688 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}) 2026-02-17 03:36:47.938701 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'}) 2026-02-17 03:36:47.938716 | orchestrator | 2026-02-17 03:36:47.938730 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-17 03:36:47.938744 | orchestrator | Tuesday 17 February 2026 03:36:46 +0000 (0:00:01.810) 0:00:39.997 ****** 2026-02-17 03:36:47.938758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:47.938775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:47.938789 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:47.938805 | orchestrator | 2026-02-17 03:36:47.938814 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-17 03:36:47.938823 | orchestrator | Tuesday 17 February 2026 03:36:46 +0000 (0:00:00.160) 0:00:40.158 ****** 2026-02-17 03:36:47.938832 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}) 2026-02-17 03:36:47.938863 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'}) 2026-02-17 03:36:54.029658 | orchestrator | 2026-02-17 03:36:54.029799 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-17 03:36:54.029829 | orchestrator | Tuesday 17 February 2026 03:36:47 +0000 (0:00:01.347) 0:00:41.505 ****** 2026-02-17 03:36:54.029851 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 
'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:54.029873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:54.029894 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.029916 | orchestrator | 2026-02-17 03:36:54.029956 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-17 03:36:54.029969 | orchestrator | Tuesday 17 February 2026 03:36:48 +0000 (0:00:00.179) 0:00:41.685 ****** 2026-02-17 03:36:54.029980 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.029991 | orchestrator | 2026-02-17 03:36:54.030002 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-17 03:36:54.030069 | orchestrator | Tuesday 17 February 2026 03:36:48 +0000 (0:00:00.167) 0:00:41.852 ****** 2026-02-17 03:36:54.030085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:54.030097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:54.030108 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030119 | orchestrator | 2026-02-17 03:36:54.030130 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-17 03:36:54.030141 | orchestrator | Tuesday 17 February 2026 03:36:48 +0000 (0:00:00.180) 0:00:42.033 ****** 2026-02-17 03:36:54.030169 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030180 | orchestrator | 2026-02-17 03:36:54.030206 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-17 03:36:54.030219 | orchestrator | 
Tuesday 17 February 2026 03:36:48 +0000 (0:00:00.153) 0:00:42.187 ****** 2026-02-17 03:36:54.030232 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:54.030245 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:54.030258 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030272 | orchestrator | 2026-02-17 03:36:54.030284 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-17 03:36:54.030298 | orchestrator | Tuesday 17 February 2026 03:36:48 +0000 (0:00:00.174) 0:00:42.361 ****** 2026-02-17 03:36:54.030310 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030323 | orchestrator | 2026-02-17 03:36:54.030372 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-17 03:36:54.030387 | orchestrator | Tuesday 17 February 2026 03:36:48 +0000 (0:00:00.159) 0:00:42.521 ****** 2026-02-17 03:36:54.030400 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:54.030413 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:54.030427 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030439 | orchestrator | 2026-02-17 03:36:54.030452 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-17 03:36:54.030515 | orchestrator | Tuesday 17 February 2026 03:36:49 +0000 (0:00:00.152) 0:00:42.674 ****** 2026-02-17 03:36:54.030539 | orchestrator | ok: [testbed-node-4] 
2026-02-17 03:36:54.030559 | orchestrator | 2026-02-17 03:36:54.030579 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-17 03:36:54.030600 | orchestrator | Tuesday 17 February 2026 03:36:49 +0000 (0:00:00.127) 0:00:42.801 ****** 2026-02-17 03:36:54.030621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:54.030643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:54.030663 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030683 | orchestrator | 2026-02-17 03:36:54.030695 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-17 03:36:54.030706 | orchestrator | Tuesday 17 February 2026 03:36:49 +0000 (0:00:00.421) 0:00:43.222 ****** 2026-02-17 03:36:54.030717 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:54.030728 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:54.030738 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030749 | orchestrator | 2026-02-17 03:36:54.030760 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-17 03:36:54.030796 | orchestrator | Tuesday 17 February 2026 03:36:49 +0000 (0:00:00.185) 0:00:43.408 ****** 2026-02-17 03:36:54.030807 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 
03:36:54.030818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:54.030830 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030840 | orchestrator | 2026-02-17 03:36:54.030851 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-17 03:36:54.030862 | orchestrator | Tuesday 17 February 2026 03:36:49 +0000 (0:00:00.168) 0:00:43.577 ****** 2026-02-17 03:36:54.030885 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030912 | orchestrator | 2026-02-17 03:36:54.030934 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-17 03:36:54.030952 | orchestrator | Tuesday 17 February 2026 03:36:50 +0000 (0:00:00.154) 0:00:43.731 ****** 2026-02-17 03:36:54.030969 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.030986 | orchestrator | 2026-02-17 03:36:54.031005 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-17 03:36:54.031023 | orchestrator | Tuesday 17 February 2026 03:36:50 +0000 (0:00:00.186) 0:00:43.917 ****** 2026-02-17 03:36:54.031057 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.031076 | orchestrator | 2026-02-17 03:36:54.031096 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-17 03:36:54.031114 | orchestrator | Tuesday 17 February 2026 03:36:50 +0000 (0:00:00.135) 0:00:44.052 ****** 2026-02-17 03:36:54.031133 | orchestrator | ok: [testbed-node-4] => { 2026-02-17 03:36:54.031146 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-17 03:36:54.031157 | orchestrator | } 2026-02-17 03:36:54.031169 | orchestrator | 2026-02-17 03:36:54.031179 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-17 
03:36:54.031191 | orchestrator | Tuesday 17 February 2026 03:36:50 +0000 (0:00:00.165) 0:00:44.217 ****** 2026-02-17 03:36:54.031202 | orchestrator | ok: [testbed-node-4] => { 2026-02-17 03:36:54.031213 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-17 03:36:54.031237 | orchestrator | } 2026-02-17 03:36:54.031248 | orchestrator | 2026-02-17 03:36:54.031259 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-17 03:36:54.031270 | orchestrator | Tuesday 17 February 2026 03:36:50 +0000 (0:00:00.149) 0:00:44.366 ****** 2026-02-17 03:36:54.031281 | orchestrator | ok: [testbed-node-4] => { 2026-02-17 03:36:54.031292 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-17 03:36:54.031304 | orchestrator | } 2026-02-17 03:36:54.031315 | orchestrator | 2026-02-17 03:36:54.031326 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-17 03:36:54.031377 | orchestrator | Tuesday 17 February 2026 03:36:50 +0000 (0:00:00.180) 0:00:44.546 ****** 2026-02-17 03:36:54.031389 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:36:54.031400 | orchestrator | 2026-02-17 03:36:54.031410 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-17 03:36:54.031421 | orchestrator | Tuesday 17 February 2026 03:36:51 +0000 (0:00:00.540) 0:00:45.087 ****** 2026-02-17 03:36:54.031432 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:36:54.031443 | orchestrator | 2026-02-17 03:36:54.031454 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-17 03:36:54.031465 | orchestrator | Tuesday 17 February 2026 03:36:52 +0000 (0:00:00.530) 0:00:45.617 ****** 2026-02-17 03:36:54.031476 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:36:54.031488 | orchestrator | 2026-02-17 03:36:54.031498 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-17 03:36:54.031535 | orchestrator | Tuesday 17 February 2026 03:36:52 +0000 (0:00:00.527) 0:00:46.145 ****** 2026-02-17 03:36:54.031546 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:36:54.031557 | orchestrator | 2026-02-17 03:36:54.031568 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-17 03:36:54.031579 | orchestrator | Tuesday 17 February 2026 03:36:52 +0000 (0:00:00.381) 0:00:46.527 ****** 2026-02-17 03:36:54.031591 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.031602 | orchestrator | 2026-02-17 03:36:54.031613 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-17 03:36:54.031624 | orchestrator | Tuesday 17 February 2026 03:36:53 +0000 (0:00:00.137) 0:00:46.665 ****** 2026-02-17 03:36:54.031635 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.031646 | orchestrator | 2026-02-17 03:36:54.031657 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-17 03:36:54.031668 | orchestrator | Tuesday 17 February 2026 03:36:53 +0000 (0:00:00.139) 0:00:46.804 ****** 2026-02-17 03:36:54.031679 | orchestrator | ok: [testbed-node-4] => { 2026-02-17 03:36:54.031690 | orchestrator |  "vgs_report": { 2026-02-17 03:36:54.031701 | orchestrator |  "vg": [] 2026-02-17 03:36:54.031712 | orchestrator |  } 2026-02-17 03:36:54.031724 | orchestrator | } 2026-02-17 03:36:54.031735 | orchestrator | 2026-02-17 03:36:54.031746 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-17 03:36:54.031757 | orchestrator | Tuesday 17 February 2026 03:36:53 +0000 (0:00:00.173) 0:00:46.978 ****** 2026-02-17 03:36:54.031768 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.031779 | orchestrator | 2026-02-17 03:36:54.031790 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-17 03:36:54.031801 | orchestrator | Tuesday 17 February 2026 03:36:53 +0000 (0:00:00.152) 0:00:47.130 ****** 2026-02-17 03:36:54.031812 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.031823 | orchestrator | 2026-02-17 03:36:54.031834 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-17 03:36:54.031845 | orchestrator | Tuesday 17 February 2026 03:36:53 +0000 (0:00:00.157) 0:00:47.288 ****** 2026-02-17 03:36:54.031856 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.031867 | orchestrator | 2026-02-17 03:36:54.031878 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-17 03:36:54.031889 | orchestrator | Tuesday 17 February 2026 03:36:53 +0000 (0:00:00.153) 0:00:47.441 ****** 2026-02-17 03:36:54.031909 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:54.031920 | orchestrator | 2026-02-17 03:36:54.031943 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-17 03:36:59.225697 | orchestrator | Tuesday 17 February 2026 03:36:54 +0000 (0:00:00.163) 0:00:47.605 ****** 2026-02-17 03:36:59.225839 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.225855 | orchestrator | 2026-02-17 03:36:59.225865 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-17 03:36:59.225875 | orchestrator | Tuesday 17 February 2026 03:36:54 +0000 (0:00:00.147) 0:00:47.752 ****** 2026-02-17 03:36:59.225884 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.225893 | orchestrator | 2026-02-17 03:36:59.225902 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-17 03:36:59.225911 | orchestrator | Tuesday 17 February 2026 03:36:54 +0000 (0:00:00.164) 0:00:47.917 ****** 2026-02-17 03:36:59.225920 | orchestrator | skipping: [testbed-node-4] 
2026-02-17 03:36:59.225928 | orchestrator | 2026-02-17 03:36:59.225957 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-17 03:36:59.225966 | orchestrator | Tuesday 17 February 2026 03:36:54 +0000 (0:00:00.146) 0:00:48.063 ****** 2026-02-17 03:36:59.225975 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.225983 | orchestrator | 2026-02-17 03:36:59.225992 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-17 03:36:59.226001 | orchestrator | Tuesday 17 February 2026 03:36:54 +0000 (0:00:00.139) 0:00:48.203 ****** 2026-02-17 03:36:59.226010 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.226070 | orchestrator | 2026-02-17 03:36:59.226079 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-17 03:36:59.226088 | orchestrator | Tuesday 17 February 2026 03:36:55 +0000 (0:00:00.414) 0:00:48.617 ****** 2026-02-17 03:36:59.226097 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.226106 | orchestrator | 2026-02-17 03:36:59.226114 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-17 03:36:59.226124 | orchestrator | Tuesday 17 February 2026 03:36:55 +0000 (0:00:00.164) 0:00:48.782 ****** 2026-02-17 03:36:59.226133 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.226142 | orchestrator | 2026-02-17 03:36:59.226151 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-17 03:36:59.226160 | orchestrator | Tuesday 17 February 2026 03:36:55 +0000 (0:00:00.137) 0:00:48.919 ****** 2026-02-17 03:36:59.226168 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.226177 | orchestrator | 2026-02-17 03:36:59.226186 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-17 03:36:59.226196 | orchestrator | 
Tuesday 17 February 2026 03:36:55 +0000 (0:00:00.154) 0:00:49.074 ****** 2026-02-17 03:36:59.226205 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.226215 | orchestrator | 2026-02-17 03:36:59.226225 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-17 03:36:59.226235 | orchestrator | Tuesday 17 February 2026 03:36:55 +0000 (0:00:00.177) 0:00:49.251 ****** 2026-02-17 03:36:59.226245 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.226254 | orchestrator | 2026-02-17 03:36:59.226265 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-17 03:36:59.226275 | orchestrator | Tuesday 17 February 2026 03:36:55 +0000 (0:00:00.153) 0:00:49.405 ****** 2026-02-17 03:36:59.226287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:59.226299 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:59.226309 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:36:59.226319 | orchestrator | 2026-02-17 03:36:59.226329 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-17 03:36:59.226407 | orchestrator | Tuesday 17 February 2026 03:36:56 +0000 (0:00:00.188) 0:00:49.593 ****** 2026-02-17 03:36:59.226425 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 03:36:59.226440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 03:36:59.226453 | orchestrator | skipping: 
[testbed-node-4]
2026-02-17 03:36:59.226464 | orchestrator |
2026-02-17 03:36:59.226474 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-17 03:36:59.226484 | orchestrator | Tuesday 17 February 2026  03:36:56 +0000 (0:00:00.160)       0:00:49.753 ******
2026-02-17 03:36:59.226495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.226505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:36:59.226517 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:36:59.226530 | orchestrator |
2026-02-17 03:36:59.226543 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-17 03:36:59.226555 | orchestrator | Tuesday 17 February 2026  03:36:56 +0000 (0:00:00.189)       0:00:49.943 ******
2026-02-17 03:36:59.226567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.226580 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:36:59.226591 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:36:59.226601 | orchestrator |
2026-02-17 03:36:59.226634 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-17 03:36:59.226646 | orchestrator | Tuesday 17 February 2026  03:36:56 +0000 (0:00:00.155)       0:00:50.098 ******
2026-02-17 03:36:59.226657 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.226668 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:36:59.226679 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:36:59.226690 | orchestrator |
2026-02-17 03:36:59.226708 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-17 03:36:59.226719 | orchestrator | Tuesday 17 February 2026  03:36:56 +0000 (0:00:00.173)       0:00:50.271 ******
2026-02-17 03:36:59.226730 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.226741 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:36:59.226752 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:36:59.226762 | orchestrator |
2026-02-17 03:36:59.226773 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-17 03:36:59.226784 | orchestrator | Tuesday 17 February 2026  03:36:56 +0000 (0:00:00.161)       0:00:50.433 ******
2026-02-17 03:36:59.226794 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.226805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:36:59.226816 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:36:59.226835 | orchestrator |
2026-02-17 03:36:59.226846 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-17 03:36:59.226857 | orchestrator | Tuesday 17 February 2026  03:36:57 +0000 (0:00:00.407)       0:00:50.840 ******
2026-02-17 03:36:59.226867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.226878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:36:59.226889 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:36:59.226899 | orchestrator |
2026-02-17 03:36:59.226910 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-17 03:36:59.226921 | orchestrator | Tuesday 17 February 2026  03:36:57 +0000 (0:00:00.183)       0:00:51.023 ******
2026-02-17 03:36:59.226932 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:36:59.226942 | orchestrator |
2026-02-17 03:36:59.226953 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-17 03:36:59.226964 | orchestrator | Tuesday 17 February 2026  03:36:57 +0000 (0:00:00.547)       0:00:51.571 ******
2026-02-17 03:36:59.226974 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:36:59.226985 | orchestrator |
2026-02-17 03:36:59.226996 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-17 03:36:59.227006 | orchestrator | Tuesday 17 February 2026  03:36:58 +0000 (0:00:00.519)       0:00:52.091 ******
2026-02-17 03:36:59.227017 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:36:59.227028 | orchestrator |
2026-02-17 03:36:59.227038 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-17 03:36:59.227049 | orchestrator | Tuesday 17 February 2026  03:36:58 +0000 (0:00:00.177)       0:00:52.268 ******
2026-02-17 03:36:59.227060 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'vg_name': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.227073 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'vg_name': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:36:59.227084 | orchestrator |
2026-02-17 03:36:59.227095 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-17 03:36:59.227105 | orchestrator | Tuesday 17 February 2026  03:36:58 +0000 (0:00:00.184)       0:00:52.453 ******
2026-02-17 03:36:59.227116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.227127 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:36:59.227138 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:36:59.227149 | orchestrator |
2026-02-17 03:36:59.227160 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-17 03:36:59.227170 | orchestrator | Tuesday 17 February 2026  03:36:59 +0000 (0:00:00.179)       0:00:52.633 ******
2026-02-17 03:36:59.227181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:36:59.227199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:37:06.165181 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:37:06.165322 | orchestrator |
2026-02-17 03:37:06.165408 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-17 03:37:06.165431 | orchestrator | Tuesday 17 February 2026  03:36:59 +0000 (0:00:00.165)       0:00:52.799 ******
2026-02-17 03:37:06.165450 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})
2026-02-17 03:37:06.165523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})
2026-02-17 03:37:06.165543 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:37:06.165561 | orchestrator |
2026-02-17 03:37:06.165573 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-17 03:37:06.165584 | orchestrator | Tuesday 17 February 2026  03:36:59 +0000 (0:00:00.165)       0:00:52.964 ******
2026-02-17 03:37:06.165595 | orchestrator | ok: [testbed-node-4] => {
2026-02-17 03:37:06.165606 | orchestrator |     "lvm_report": {
2026-02-17 03:37:06.165618 | orchestrator |         "lv": [
2026-02-17 03:37:06.165631 | orchestrator |             {
2026-02-17 03:37:06.165644 | orchestrator |                 "lv_name": "osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b",
2026-02-17 03:37:06.165657 | orchestrator |                 "vg_name": "ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b"
2026-02-17 03:37:06.165670 | orchestrator |             },
2026-02-17 03:37:06.165682 | orchestrator |             {
2026-02-17 03:37:06.165695 | orchestrator |                 "lv_name": "osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0",
2026-02-17 03:37:06.165707 | orchestrator |                 "vg_name": "ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0"
2026-02-17 03:37:06.165719 | orchestrator |             }
2026-02-17 03:37:06.165738 | orchestrator |         ],
2026-02-17 03:37:06.165756 | orchestrator |         "pv": [
2026-02-17 03:37:06.165774 | orchestrator |             {
2026-02-17 03:37:06.165790 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-17 03:37:06.165810 | orchestrator |                 "vg_name": "ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b"
2026-02-17 03:37:06.165831 | orchestrator |             },
2026-02-17 03:37:06.165850 | orchestrator |             {
2026-02-17 03:37:06.165871 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-17 03:37:06.165885 | orchestrator |                 "vg_name": "ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0"
2026-02-17 03:37:06.165897 | orchestrator |             }
2026-02-17 03:37:06.165910 | orchestrator |         ]
2026-02-17 03:37:06.165923 | orchestrator |     }
2026-02-17 03:37:06.165936 | orchestrator | }
2026-02-17 03:37:06.165948 | orchestrator |
2026-02-17 03:37:06.165960 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-17 03:37:06.165973 | orchestrator |
2026-02-17 03:37:06.165985 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-17 03:37:06.165996 | orchestrator | Tuesday 17 February 2026  03:36:59 +0000 (0:00:00.313)       0:00:53.278 ******
2026-02-17 03:37:06.166006 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-17 03:37:06.166113 | orchestrator |
2026-02-17 03:37:06.166129 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-17 03:37:06.166141 | orchestrator | Tuesday 17 February 2026  03:37:00 +0000 (0:00:00.798)       0:00:54.077 ******
2026-02-17 03:37:06.166152 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:37:06.166163 | orchestrator |
2026-02-17 03:37:06.166174 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166184 | orchestrator | Tuesday 17 February 2026  03:37:00 +0000 (0:00:00.263)       0:00:54.341 ******
2026-02-17 03:37:06.166195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-17 03:37:06.166206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-17 03:37:06.166217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-17 03:37:06.166228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-17 03:37:06.166238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-17 03:37:06.166249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-17 03:37:06.166260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-17 03:37:06.166286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-17 03:37:06.166296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-17 03:37:06.166307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-17 03:37:06.166318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-17 03:37:06.166329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-17 03:37:06.166362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-17 03:37:06.166374 | orchestrator |
2026-02-17 03:37:06.166384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166395 | orchestrator | Tuesday 17 February 2026  03:37:01 +0000 (0:00:00.423)       0:00:54.764 ******
2026-02-17 03:37:06.166406 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:06.166417 | orchestrator |
2026-02-17 03:37:06.166428 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166439 | orchestrator | Tuesday 17 February 2026  03:37:01 +0000 (0:00:00.229)       0:00:54.994 ******
2026-02-17 03:37:06.166449 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:06.166460 | orchestrator |
2026-02-17 03:37:06.166471 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166504 | orchestrator | Tuesday 17 February 2026  03:37:01 +0000 (0:00:00.229)       0:00:55.223 ******
2026-02-17 03:37:06.166516 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:06.166527 | orchestrator |
2026-02-17 03:37:06.166538 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166548 | orchestrator | Tuesday 17 February 2026  03:37:01 +0000 (0:00:00.227)       0:00:55.451 ******
2026-02-17 03:37:06.166559 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:06.166570 | orchestrator |
2026-02-17 03:37:06.166580 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166602 | orchestrator | Tuesday 17 February 2026  03:37:02 +0000 (0:00:00.225)       0:00:55.676 ******
2026-02-17 03:37:06.166613 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:06.166624 | orchestrator |
2026-02-17 03:37:06.166635 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166646 | orchestrator | Tuesday 17 February 2026  03:37:02 +0000 (0:00:00.220)       0:00:55.896 ******
2026-02-17 03:37:06.166657 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:06.166667 | orchestrator |
2026-02-17 03:37:06.166678 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166689 | orchestrator | Tuesday 17 February 2026  03:37:02 +0000 (0:00:00.236)       0:00:56.133 ******
2026-02-17 03:37:06.166700 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:06.166710 | orchestrator |
2026-02-17 03:37:06.166721 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166732 | orchestrator | Tuesday 17 February 2026  03:37:02 +0000 (0:00:00.238)       0:00:56.371 ******
2026-02-17 03:37:06.166742 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:06.166753 | orchestrator |
2026-02-17 03:37:06.166764 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166774 | orchestrator | Tuesday 17 February 2026  03:37:02 +0000 (0:00:00.198)       0:00:56.570 ******
2026-02-17 03:37:06.166785 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944)
2026-02-17 03:37:06.166798 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944)
2026-02-17 03:37:06.166808 | orchestrator |
2026-02-17 03:37:06.166819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166830 | orchestrator | Tuesday 17 February 2026  03:37:03 +0000 (0:00:00.926)       0:00:57.496 ******
2026-02-17 03:37:06.166871 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86)
2026-02-17 03:37:06.166891 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86)
2026-02-17 03:37:06.166902 | orchestrator |
2026-02-17 03:37:06.166913 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166924 | orchestrator | Tuesday 17 February 2026  03:37:04 +0000 (0:00:00.472)       0:00:57.969 ******
2026-02-17 03:37:06.166934 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d)
2026-02-17 03:37:06.166945 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d)
2026-02-17 03:37:06.166956 | orchestrator |
2026-02-17 03:37:06.166967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.166978 | orchestrator | Tuesday 17 February 2026  03:37:04 +0000 (0:00:00.466)       0:00:58.436 ******
2026-02-17 03:37:06.166989 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc)
2026-02-17 03:37:06.167000 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc)
2026-02-17 03:37:06.167011 | orchestrator |
2026-02-17 03:37:06.167022 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-17 03:37:06.167032 | orchestrator | Tuesday 17 February 2026  03:37:05 +0000 (0:00:00.469)       0:00:58.905 ******
2026-02-17 03:37:06.167043 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-17 03:37:06.167054 | orchestrator |
2026-02-17 03:37:06.167065 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:06.167075 | orchestrator | Tuesday 17 February 2026  03:37:05 +0000 (0:00:00.377)       0:00:59.283 ******
2026-02-17 03:37:06.167086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-17 03:37:06.167096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-17 03:37:06.167107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-17 03:37:06.167117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-17 03:37:06.167128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-17 03:37:06.167139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-17 03:37:06.167149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-17 03:37:06.167160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-17 03:37:06.167171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-17 03:37:06.167181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-17 03:37:06.167192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-17 03:37:06.167211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-17 03:37:15.653000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-17 03:37:15.653101 | orchestrator |
2026-02-17 03:37:15.653113 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653121 | orchestrator | Tuesday 17 February 2026  03:37:06 +0000 (0:00:00.451)       0:00:59.734 ******
2026-02-17 03:37:15.653128 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653136 | orchestrator |
2026-02-17 03:37:15.653143 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653163 | orchestrator | Tuesday 17 February 2026  03:37:06 +0000 (0:00:00.235)       0:00:59.969 ******
2026-02-17 03:37:15.653170 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653195 | orchestrator |
2026-02-17 03:37:15.653202 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653208 | orchestrator | Tuesday 17 February 2026  03:37:06 +0000 (0:00:00.219)       0:01:00.188 ******
2026-02-17 03:37:15.653214 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653220 | orchestrator |
2026-02-17 03:37:15.653226 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653232 | orchestrator | Tuesday 17 February 2026  03:37:06 +0000 (0:00:00.220)       0:01:00.409 ******
2026-02-17 03:37:15.653237 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653244 | orchestrator |
2026-02-17 03:37:15.653249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653255 | orchestrator | Tuesday 17 February 2026  03:37:07 +0000 (0:00:00.233)       0:01:00.643 ******
2026-02-17 03:37:15.653262 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653267 | orchestrator |
2026-02-17 03:37:15.653274 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653280 | orchestrator | Tuesday 17 February 2026  03:37:07 +0000 (0:00:00.730)       0:01:01.373 ******
2026-02-17 03:37:15.653286 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653292 | orchestrator |
2026-02-17 03:37:15.653298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653304 | orchestrator | Tuesday 17 February 2026  03:37:08 +0000 (0:00:00.225)       0:01:01.599 ******
2026-02-17 03:37:15.653310 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653316 | orchestrator |
2026-02-17 03:37:15.653323 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653330 | orchestrator | Tuesday 17 February 2026  03:37:08 +0000 (0:00:00.233)       0:01:01.833 ******
2026-02-17 03:37:15.653337 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653368 | orchestrator |
2026-02-17 03:37:15.653375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653381 | orchestrator | Tuesday 17 February 2026  03:37:08 +0000 (0:00:00.245)       0:01:02.079 ******
2026-02-17 03:37:15.653387 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-17 03:37:15.653394 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-17 03:37:15.653400 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-17 03:37:15.653406 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-17 03:37:15.653411 | orchestrator |
2026-02-17 03:37:15.653416 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653422 | orchestrator | Tuesday 17 February 2026  03:37:09 +0000 (0:00:00.729)       0:01:02.808 ******
2026-02-17 03:37:15.653428 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653433 | orchestrator |
2026-02-17 03:37:15.653440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653445 | orchestrator | Tuesday 17 February 2026  03:37:09 +0000 (0:00:00.222)       0:01:03.030 ******
2026-02-17 03:37:15.653448 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653452 | orchestrator |
2026-02-17 03:37:15.653456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653460 | orchestrator | Tuesday 17 February 2026  03:37:09 +0000 (0:00:00.221)       0:01:03.252 ******
2026-02-17 03:37:15.653464 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653468 | orchestrator |
2026-02-17 03:37:15.653471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-17 03:37:15.653475 | orchestrator | Tuesday 17 February 2026  03:37:09 +0000 (0:00:00.226)       0:01:03.478 ******
2026-02-17 03:37:15.653479 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653483 | orchestrator |
2026-02-17 03:37:15.653487 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-17 03:37:15.653491 | orchestrator | Tuesday 17 February 2026  03:37:10 +0000 (0:00:00.300)       0:01:03.779 ******
2026-02-17 03:37:15.653495 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653498 | orchestrator |
2026-02-17 03:37:15.653509 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-17 03:37:15.653513 | orchestrator | Tuesday 17 February 2026  03:37:10 +0000 (0:00:00.153)       0:01:03.932 ******
2026-02-17 03:37:15.653518 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '415e7a1a-a305-5338-824f-e9750ca5ebee'}})
2026-02-17 03:37:15.653522 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67fd3cab-24d5-5329-b459-0f3a5a04c841'}})
2026-02-17 03:37:15.653526 | orchestrator |
2026-02-17 03:37:15.653530 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-17 03:37:15.653534 | orchestrator | Tuesday 17 February 2026  03:37:10 +0000 (0:00:00.197)       0:01:04.130 ******
2026-02-17 03:37:15.653538 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:15.653544 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:15.653547 | orchestrator |
2026-02-17 03:37:15.653551 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-17 03:37:15.653569 | orchestrator | Tuesday 17 February 2026  03:37:12 +0000 (0:00:01.852)       0:01:05.982 ******
2026-02-17 03:37:15.653573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:15.653578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:15.653582 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653586 | orchestrator |
2026-02-17 03:37:15.653596 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-17 03:37:15.653600 | orchestrator | Tuesday 17 February 2026  03:37:12 +0000 (0:00:00.396)       0:01:06.379 ******
2026-02-17 03:37:15.653604 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:15.653607 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:15.653611 | orchestrator |
2026-02-17 03:37:15.653615 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-17 03:37:15.653619 | orchestrator | Tuesday 17 February 2026  03:37:14 +0000 (0:00:01.359)       0:01:07.739 ******
2026-02-17 03:37:15.653623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:15.653627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:15.653630 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653634 | orchestrator |
2026-02-17 03:37:15.653638 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-17 03:37:15.653642 | orchestrator | Tuesday 17 February 2026  03:37:14 +0000 (0:00:00.161)       0:01:07.900 ******
2026-02-17 03:37:15.653646 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653649 | orchestrator |
2026-02-17 03:37:15.653653 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-17 03:37:15.653657 | orchestrator | Tuesday 17 February 2026  03:37:14 +0000 (0:00:00.171)       0:01:08.072 ******
2026-02-17 03:37:15.653661 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:15.653665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:15.653671 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653675 | orchestrator |
2026-02-17 03:37:15.653679 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-17 03:37:15.653683 | orchestrator | Tuesday 17 February 2026  03:37:14 +0000 (0:00:00.173)       0:01:08.245 ******
2026-02-17 03:37:15.653687 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653690 | orchestrator |
2026-02-17 03:37:15.653694 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-17 03:37:15.653698 | orchestrator | Tuesday 17 February 2026  03:37:14 +0000 (0:00:00.155)       0:01:08.401 ******
2026-02-17 03:37:15.653702 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:15.653706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:15.653709 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653713 | orchestrator |
2026-02-17 03:37:15.653717 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-17 03:37:15.653721 | orchestrator | Tuesday 17 February 2026  03:37:14 +0000 (0:00:00.173)       0:01:08.574 ******
2026-02-17 03:37:15.653725 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653729 | orchestrator |
2026-02-17 03:37:15.653732 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-17 03:37:15.653736 | orchestrator | Tuesday 17 February 2026  03:37:15 +0000 (0:00:00.157)       0:01:08.732 ******
2026-02-17 03:37:15.653740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:15.653744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:15.653748 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:15.653753 | orchestrator |
2026-02-17 03:37:15.653759 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-17 03:37:15.653763 | orchestrator | Tuesday 17 February 2026  03:37:15 +0000 (0:00:00.166)       0:01:08.899 ******
2026-02-17 03:37:15.653767 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:37:15.653771 | orchestrator |
2026-02-17 03:37:15.653774 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-17 03:37:15.653778 | orchestrator | Tuesday 17 February 2026  03:37:15 +0000 (0:00:00.159)       0:01:09.059 ******
2026-02-17 03:37:15.653786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:22.721425 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:22.721667 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.721690 | orchestrator |
2026-02-17 03:37:22.721706 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-17 03:37:22.721722 | orchestrator | Tuesday 17 February 2026  03:37:15 +0000 (0:00:00.172)       0:01:09.231 ******
2026-02-17 03:37:22.721757 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:22.721774 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:22.721788 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.721797 | orchestrator |
2026-02-17 03:37:22.721806 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-17 03:37:22.721815 | orchestrator | Tuesday 17 February 2026  03:37:15 +0000 (0:00:00.187)       0:01:09.419 ******
2026-02-17 03:37:22.721849 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})
2026-02-17 03:37:22.721858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})
2026-02-17 03:37:22.721867 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.721876 | orchestrator |
2026-02-17 03:37:22.721885 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-17 03:37:22.721893 | orchestrator | Tuesday 17 February 2026  03:37:16 +0000 (0:00:00.454)       0:01:09.874 ******
2026-02-17 03:37:22.721902 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.721913 | orchestrator |
2026-02-17 03:37:22.721923 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-17 03:37:22.721932 | orchestrator | Tuesday 17 February 2026  03:37:16 +0000 (0:00:00.169)       0:01:10.043 ******
2026-02-17 03:37:22.721943 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.721953 | orchestrator |
2026-02-17 03:37:22.721963 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-17 03:37:22.721973 | orchestrator | Tuesday 17 February 2026  03:37:16 +0000 (0:00:00.166)       0:01:10.209 ******
2026-02-17 03:37:22.721983 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.721992 | orchestrator |
2026-02-17 03:37:22.722002 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-17 03:37:22.722075 | orchestrator | Tuesday 17 February 2026  03:37:16 +0000 (0:00:00.154)       0:01:10.364 ******
2026-02-17 03:37:22.722091 | orchestrator | ok: [testbed-node-5] => {
2026-02-17 03:37:22.722102 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-17 03:37:22.722118 | orchestrator | }
2026-02-17 03:37:22.722135 | orchestrator |
2026-02-17 03:37:22.722150 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-17 03:37:22.722164 | orchestrator | Tuesday 17 February 2026  03:37:16 +0000 (0:00:00.148)       0:01:10.512 ******
2026-02-17 03:37:22.722180 | orchestrator | ok: [testbed-node-5] => {
2026-02-17 03:37:22.722195 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-17 03:37:22.722209 | orchestrator | }
2026-02-17 03:37:22.722222 | orchestrator |
2026-02-17 03:37:22.722231 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-17 03:37:22.722240 | orchestrator | Tuesday 17 February 2026  03:37:17 +0000 (0:00:00.205)       0:01:10.718 ******
2026-02-17 03:37:22.722249 | orchestrator | ok: [testbed-node-5] => {
2026-02-17 03:37:22.722257 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-17 03:37:22.722266 | orchestrator | }
2026-02-17 03:37:22.722275 | orchestrator |
2026-02-17 03:37:22.722283 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-17 03:37:22.722292 | orchestrator | Tuesday 17 February 2026  03:37:17 +0000 (0:00:00.180)       0:01:10.898 ******
2026-02-17 03:37:22.722301 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:37:22.722310 | orchestrator |
2026-02-17 03:37:22.722318 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-17 03:37:22.722327 | orchestrator | Tuesday 17 February 2026  03:37:17 +0000 (0:00:00.540)       0:01:11.439 ******
2026-02-17 03:37:22.722336 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:37:22.722401 | orchestrator |
2026-02-17 03:37:22.722413 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-17 03:37:22.722422 | orchestrator | Tuesday 17 February 2026  03:37:18 +0000 (0:00:00.515)       0:01:11.954 ******
2026-02-17 03:37:22.722431 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:37:22.722440 | orchestrator |
2026-02-17 03:37:22.722448 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-17 03:37:22.722457 | orchestrator | Tuesday 17 February 2026  03:37:18 +0000 (0:00:00.518)       0:01:12.473 ******
2026-02-17 03:37:22.722465 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:37:22.722474 | orchestrator |
2026-02-17 03:37:22.722483 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-17 03:37:22.722501 | orchestrator | Tuesday 17 February 2026  03:37:19 +0000 (0:00:00.175)       0:01:12.649 ******
2026-02-17 03:37:22.722510 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.722519 | orchestrator |
2026-02-17 03:37:22.722527 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-17 03:37:22.722536 | orchestrator | Tuesday 17 February 2026  03:37:19 +0000 (0:00:00.124)       0:01:12.773 ******
2026-02-17 03:37:22.722544 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.722553 | orchestrator |
2026-02-17 03:37:22.722561 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-17 03:37:22.722570 | orchestrator | Tuesday 17 February 2026  03:37:19 +0000 (0:00:00.358)       0:01:13.132 ******
2026-02-17 03:37:22.722579 | orchestrator | ok: [testbed-node-5] => {
2026-02-17 03:37:22.722587 | orchestrator |     "vgs_report": {
2026-02-17 03:37:22.722596 | orchestrator |         "vg": []
2026-02-17 03:37:22.722626 | orchestrator |     }
2026-02-17 03:37:22.722635 | orchestrator | }
2026-02-17 03:37:22.722644 | orchestrator |
2026-02-17 03:37:22.722652 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-17 03:37:22.722661 | orchestrator | Tuesday 17 February 2026  03:37:19 +0000 (0:00:00.168)       0:01:13.300 ******
2026-02-17 03:37:22.722670 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.722679 | orchestrator |
2026-02-17 03:37:22.722687 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-17 03:37:22.722696 | orchestrator | Tuesday 17 February 2026  03:37:19 +0000 (0:00:00.142)       0:01:13.443 ******
2026-02-17 03:37:22.722711 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.722720 | orchestrator |
2026-02-17 03:37:22.722729 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-17 03:37:22.722738 | orchestrator | Tuesday 17 February 2026  03:37:20 +0000 (0:00:00.167)       0:01:13.605 ******
2026-02-17 03:37:22.722746 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:37:22.722755 | orchestrator |
2026-02-17 03:37:22.722763 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-17 03:37:22.722772 | orchestrator | Tuesday 17 February 2026  03:37:20 +0000 (0:00:00.167)       0:01:13.772 ******
2026-02-17 03:37:22.722781 | orchestrator |
skipping: [testbed-node-5] 2026-02-17 03:37:22.722789 | orchestrator | 2026-02-17 03:37:22.722798 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-17 03:37:22.722807 | orchestrator | Tuesday 17 February 2026 03:37:20 +0000 (0:00:00.155) 0:01:13.928 ****** 2026-02-17 03:37:22.722815 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.722824 | orchestrator | 2026-02-17 03:37:22.722832 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-17 03:37:22.722841 | orchestrator | Tuesday 17 February 2026 03:37:20 +0000 (0:00:00.143) 0:01:14.071 ****** 2026-02-17 03:37:22.722850 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.722858 | orchestrator | 2026-02-17 03:37:22.722867 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-17 03:37:22.722875 | orchestrator | Tuesday 17 February 2026 03:37:20 +0000 (0:00:00.144) 0:01:14.216 ****** 2026-02-17 03:37:22.722884 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.722893 | orchestrator | 2026-02-17 03:37:22.722901 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-17 03:37:22.722910 | orchestrator | Tuesday 17 February 2026 03:37:20 +0000 (0:00:00.153) 0:01:14.369 ****** 2026-02-17 03:37:22.722919 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.722927 | orchestrator | 2026-02-17 03:37:22.722936 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-17 03:37:22.722944 | orchestrator | Tuesday 17 February 2026 03:37:20 +0000 (0:00:00.156) 0:01:14.526 ****** 2026-02-17 03:37:22.722953 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.722962 | orchestrator | 2026-02-17 03:37:22.722970 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-17 
03:37:22.722979 | orchestrator | Tuesday 17 February 2026 03:37:21 +0000 (0:00:00.152) 0:01:14.678 ****** 2026-02-17 03:37:22.722994 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.723002 | orchestrator | 2026-02-17 03:37:22.723011 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-17 03:37:22.723020 | orchestrator | Tuesday 17 February 2026 03:37:21 +0000 (0:00:00.146) 0:01:14.825 ****** 2026-02-17 03:37:22.723028 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.723037 | orchestrator | 2026-02-17 03:37:22.723046 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-17 03:37:22.723054 | orchestrator | Tuesday 17 February 2026 03:37:21 +0000 (0:00:00.403) 0:01:15.228 ****** 2026-02-17 03:37:22.723063 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.723071 | orchestrator | 2026-02-17 03:37:22.723080 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-17 03:37:22.723089 | orchestrator | Tuesday 17 February 2026 03:37:21 +0000 (0:00:00.153) 0:01:15.382 ****** 2026-02-17 03:37:22.723097 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.723106 | orchestrator | 2026-02-17 03:37:22.723114 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-17 03:37:22.723123 | orchestrator | Tuesday 17 February 2026 03:37:21 +0000 (0:00:00.158) 0:01:15.541 ****** 2026-02-17 03:37:22.723132 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.723140 | orchestrator | 2026-02-17 03:37:22.723149 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-17 03:37:22.723158 | orchestrator | Tuesday 17 February 2026 03:37:22 +0000 (0:00:00.157) 0:01:15.698 ****** 2026-02-17 03:37:22.723166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:22.723175 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:22.723184 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.723193 | orchestrator | 2026-02-17 03:37:22.723201 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-17 03:37:22.723210 | orchestrator | Tuesday 17 February 2026 03:37:22 +0000 (0:00:00.205) 0:01:15.904 ****** 2026-02-17 03:37:22.723219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:22.723227 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:22.723236 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:22.723245 | orchestrator | 2026-02-17 03:37:22.723253 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-17 03:37:22.723262 | orchestrator | Tuesday 17 February 2026 03:37:22 +0000 (0:00:00.207) 0:01:16.112 ****** 2026-02-17 03:37:22.723278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.011929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.013210 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.013262 | orchestrator | 2026-02-17 03:37:26.013297 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-17 03:37:26.013310 | orchestrator | Tuesday 17 February 2026 03:37:22 +0000 (0:00:00.186) 0:01:16.298 ****** 2026-02-17 03:37:26.013322 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.013333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.013407 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.013421 | orchestrator | 2026-02-17 03:37:26.013432 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-17 03:37:26.013443 | orchestrator | Tuesday 17 February 2026 03:37:22 +0000 (0:00:00.175) 0:01:16.474 ****** 2026-02-17 03:37:26.013454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.013465 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.013476 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.013487 | orchestrator | 2026-02-17 03:37:26.013498 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-17 03:37:26.013509 | orchestrator | Tuesday 17 February 2026 03:37:23 +0000 (0:00:00.175) 0:01:16.649 ****** 2026-02-17 03:37:26.013520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.013531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.013541 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.013552 | orchestrator | 2026-02-17 03:37:26.013563 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-17 03:37:26.013574 | orchestrator | Tuesday 17 February 2026 03:37:23 +0000 (0:00:00.174) 0:01:16.824 ****** 2026-02-17 03:37:26.013585 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.013596 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.013607 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.013617 | orchestrator | 2026-02-17 03:37:26.013628 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-17 03:37:26.013639 | orchestrator | Tuesday 17 February 2026 03:37:23 +0000 (0:00:00.173) 0:01:16.997 ****** 2026-02-17 03:37:26.013650 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.013661 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.013672 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.013683 | orchestrator | 2026-02-17 03:37:26.013693 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-17 03:37:26.013704 | orchestrator | Tuesday 17 February 2026 03:37:23 +0000 (0:00:00.163) 0:01:17.160 ****** 2026-02-17 03:37:26.013715 | 
orchestrator | ok: [testbed-node-5] 2026-02-17 03:37:26.013812 | orchestrator | 2026-02-17 03:37:26.013827 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-17 03:37:26.013838 | orchestrator | Tuesday 17 February 2026 03:37:24 +0000 (0:00:00.791) 0:01:17.952 ****** 2026-02-17 03:37:26.013849 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:37:26.013859 | orchestrator | 2026-02-17 03:37:26.013870 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-17 03:37:26.013881 | orchestrator | Tuesday 17 February 2026 03:37:24 +0000 (0:00:00.527) 0:01:18.480 ****** 2026-02-17 03:37:26.013892 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:37:26.013903 | orchestrator | 2026-02-17 03:37:26.013914 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-17 03:37:26.013925 | orchestrator | Tuesday 17 February 2026 03:37:25 +0000 (0:00:00.182) 0:01:18.662 ****** 2026-02-17 03:37:26.013945 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'vg_name': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'}) 2026-02-17 03:37:26.013958 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'vg_name': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'}) 2026-02-17 03:37:26.013969 | orchestrator | 2026-02-17 03:37:26.013980 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-17 03:37:26.013991 | orchestrator | Tuesday 17 February 2026 03:37:25 +0000 (0:00:00.179) 0:01:18.842 ****** 2026-02-17 03:37:26.014096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.014146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.014166 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.014185 | orchestrator | 2026-02-17 03:37:26.014205 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-17 03:37:26.014223 | orchestrator | Tuesday 17 February 2026 03:37:25 +0000 (0:00:00.170) 0:01:19.012 ****** 2026-02-17 03:37:26.014242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.014262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.014282 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.014383 | orchestrator | 2026-02-17 03:37:26.014399 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-17 03:37:26.014410 | orchestrator | Tuesday 17 February 2026 03:37:25 +0000 (0:00:00.171) 0:01:19.184 ****** 2026-02-17 03:37:26.014421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 03:37:26.014432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 03:37:26.014443 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:26.014454 | orchestrator | 2026-02-17 03:37:26.014464 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-17 03:37:26.014475 | orchestrator | Tuesday 17 February 2026 03:37:25 +0000 (0:00:00.197) 0:01:19.381 ****** 2026-02-17 03:37:26.014486 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-17 03:37:26.014497 | orchestrator |  "lvm_report": { 2026-02-17 03:37:26.014507 | orchestrator |  "lv": [ 2026-02-17 03:37:26.014518 | orchestrator |  { 2026-02-17 03:37:26.014530 | orchestrator |  "lv_name": "osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee", 2026-02-17 03:37:26.014541 | orchestrator |  "vg_name": "ceph-415e7a1a-a305-5338-824f-e9750ca5ebee" 2026-02-17 03:37:26.014552 | orchestrator |  }, 2026-02-17 03:37:26.014563 | orchestrator |  { 2026-02-17 03:37:26.014573 | orchestrator |  "lv_name": "osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841", 2026-02-17 03:37:26.014584 | orchestrator |  "vg_name": "ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841" 2026-02-17 03:37:26.014595 | orchestrator |  } 2026-02-17 03:37:26.014606 | orchestrator |  ], 2026-02-17 03:37:26.014616 | orchestrator |  "pv": [ 2026-02-17 03:37:26.014627 | orchestrator |  { 2026-02-17 03:37:26.014671 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-17 03:37:26.014683 | orchestrator |  "vg_name": "ceph-415e7a1a-a305-5338-824f-e9750ca5ebee" 2026-02-17 03:37:26.014695 | orchestrator |  }, 2026-02-17 03:37:26.014705 | orchestrator |  { 2026-02-17 03:37:26.014716 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-17 03:37:26.014742 | orchestrator |  "vg_name": "ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841" 2026-02-17 03:37:26.014753 | orchestrator |  } 2026-02-17 03:37:26.014764 | orchestrator |  ] 2026-02-17 03:37:26.014774 | orchestrator |  } 2026-02-17 03:37:26.014786 | orchestrator | } 2026-02-17 03:37:26.014797 | orchestrator | 2026-02-17 03:37:26.014808 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:37:26.014819 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-17 03:37:26.014830 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-17 03:37:26.014842 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-17 03:37:26.014852 | orchestrator | 2026-02-17 03:37:26.014864 | orchestrator | 2026-02-17 03:37:26.014874 | orchestrator | 2026-02-17 03:37:26.014885 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:37:26.014896 | orchestrator | Tuesday 17 February 2026 03:37:25 +0000 (0:00:00.180) 0:01:19.562 ****** 2026-02-17 03:37:26.014907 | orchestrator | =============================================================================== 2026-02-17 03:37:26.014918 | orchestrator | Create block VGs -------------------------------------------------------- 5.70s 2026-02-17 03:37:26.014928 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2026-02-17 03:37:26.014939 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.90s 2026-02-17 03:37:26.014950 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s 2026-02-17 03:37:26.014961 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2026-02-17 03:37:26.014972 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2026-02-17 03:37:26.014983 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.56s 2026-02-17 03:37:26.014994 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2026-02-17 03:37:26.015016 | orchestrator | Add known partitions to the list of available block devices ------------- 1.42s 2026-02-17 03:37:26.460609 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.35s 2026-02-17 03:37:26.460707 | orchestrator | Print LVM report data --------------------------------------------------- 1.04s 2026-02-17 03:37:26.460720 | 
orchestrator | Add known links to the list of available block devices ------------------ 1.01s 2026-02-17 03:37:26.460749 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.96s 2026-02-17 03:37:26.460760 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.95s 2026-02-17 03:37:26.460770 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-02-17 03:37:26.460780 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.80s 2026-02-17 03:37:26.460789 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s 2026-02-17 03:37:26.460799 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-02-17 03:37:26.460809 | orchestrator | Count OSDs put on ceph_wal_devices defined in lvm_volumes --------------- 0.78s 2026-02-17 03:37:26.460818 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.76s 2026-02-17 03:37:39.063539 | orchestrator | 2026-02-17 03:37:39 | INFO  | Task 86b42aa5-d828-435e-a79c-5a191c98de63 (facts) was prepared for execution. 2026-02-17 03:37:39.063715 | orchestrator | 2026-02-17 03:37:39 | INFO  | It takes a moment until task 86b42aa5-d828-435e-a79c-5a191c98de63 (facts) has been started and output is visible here. 
2026-02-17 03:37:52.666452 | orchestrator | 2026-02-17 03:37:52.666598 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-17 03:37:52.666649 | orchestrator | 2026-02-17 03:37:52.666661 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-17 03:37:52.666695 | orchestrator | Tuesday 17 February 2026 03:37:43 +0000 (0:00:00.289) 0:00:00.289 ****** 2026-02-17 03:37:52.666708 | orchestrator | ok: [testbed-manager] 2026-02-17 03:37:52.666733 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:37:52.666745 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:37:52.666755 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:37:52.666766 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:37:52.666777 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:37:52.666788 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:37:52.666799 | orchestrator | 2026-02-17 03:37:52.666810 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-17 03:37:52.666821 | orchestrator | Tuesday 17 February 2026 03:37:44 +0000 (0:00:01.213) 0:00:01.502 ****** 2026-02-17 03:37:52.666833 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:37:52.666845 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:37:52.666856 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:37:52.666869 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:37:52.666881 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:37:52.666893 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:37:52.666905 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:52.666918 | orchestrator | 2026-02-17 03:37:52.666930 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-17 03:37:52.666942 | orchestrator | 2026-02-17 03:37:52.666953 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-17 03:37:52.666964 | orchestrator | Tuesday 17 February 2026 03:37:46 +0000 (0:00:01.383) 0:00:02.886 ****** 2026-02-17 03:37:52.666975 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:37:52.666986 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:37:52.666997 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:37:52.667008 | orchestrator | ok: [testbed-manager] 2026-02-17 03:37:52.667019 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:37:52.667029 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:37:52.667040 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:37:52.667051 | orchestrator | 2026-02-17 03:37:52.667062 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-17 03:37:52.667073 | orchestrator | 2026-02-17 03:37:52.667085 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-17 03:37:52.667096 | orchestrator | Tuesday 17 February 2026 03:37:51 +0000 (0:00:05.309) 0:00:08.195 ****** 2026-02-17 03:37:52.667107 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:37:52.667118 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:37:52.667129 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:37:52.667140 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:37:52.667151 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:37:52.667161 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:37:52.667172 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:37:52.667183 | orchestrator | 2026-02-17 03:37:52.667194 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:37:52.667206 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:37:52.667218 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-17 03:37:52.667229 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:37:52.667240 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:37:52.667251 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:37:52.667271 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:37:52.667282 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:37:52.667292 | orchestrator | 2026-02-17 03:37:52.667304 | orchestrator | 2026-02-17 03:37:52.667315 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:37:52.667344 | orchestrator | Tuesday 17 February 2026 03:37:52 +0000 (0:00:00.591) 0:00:08.786 ****** 2026-02-17 03:37:52.667356 | orchestrator | =============================================================================== 2026-02-17 03:37:52.667457 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.31s 2026-02-17 03:37:52.667476 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s 2026-02-17 03:37:52.667488 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.21s 2026-02-17 03:37:52.667499 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-02-17 03:37:55.305861 | orchestrator | 2026-02-17 03:37:55 | INFO  | Task 76bdae67-bfc0-4757-b4e1-b928e4c38fe3 (ceph) was prepared for execution. 2026-02-17 03:37:55.305937 | orchestrator | 2026-02-17 03:37:55 | INFO  | It takes a moment until task 76bdae67-bfc0-4757-b4e1-b928e4c38fe3 (ceph) has been started and output is visible here. 
2026-02-17 03:38:14.528342 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-17 03:38:14.528583 | orchestrator | 2.16.14 2026-02-17 03:38:14.528651 | orchestrator | 2026-02-17 03:38:14.528673 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-17 03:38:14.528693 | orchestrator | 2026-02-17 03:38:14.528711 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 03:38:14.528729 | orchestrator | Tuesday 17 February 2026 03:38:00 +0000 (0:00:00.869) 0:00:00.869 ****** 2026-02-17 03:38:14.528749 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:38:14.528768 | orchestrator | 2026-02-17 03:38:14.528779 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 03:38:14.528790 | orchestrator | Tuesday 17 February 2026 03:38:02 +0000 (0:00:01.255) 0:00:02.125 ****** 2026-02-17 03:38:14.528801 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:38:14.528812 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:38:14.528823 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:38:14.528836 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:38:14.528848 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:38:14.528860 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:38:14.528874 | orchestrator | 2026-02-17 03:38:14.528887 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 03:38:14.528899 | orchestrator | Tuesday 17 February 2026 03:38:03 +0000 (0:00:01.321) 0:00:03.446 ****** 2026-02-17 03:38:14.528911 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:38:14.528924 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:38:14.528936 | orchestrator | ok: [testbed-node-5] 2026-02-17 
03:38:14.528947 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:38:14.528959 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:38:14.528972 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:38:14.528986 | orchestrator |
2026-02-17 03:38:14.529004 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-17 03:38:14.529024 | orchestrator | Tuesday 17 February 2026 03:38:04 +0000 (0:00:00.862) 0:00:04.309 ******
2026-02-17 03:38:14.529042 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:38:14.529061 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:38:14.529073 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:38:14.529084 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:38:14.529128 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:38:14.529140 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:38:14.529150 | orchestrator |
2026-02-17 03:38:14.529161 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-17 03:38:14.529172 | orchestrator | Tuesday 17 February 2026 03:38:05 +0000 (0:00:00.971) 0:00:05.280 ******
2026-02-17 03:38:14.529182 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:38:14.529193 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:38:14.529204 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:38:14.529214 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:38:14.529225 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:38:14.529236 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:38:14.529246 | orchestrator |
2026-02-17 03:38:14.529257 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-17 03:38:14.529268 | orchestrator | Tuesday 17 February 2026 03:38:06 +0000 (0:00:00.843) 0:00:06.123 ******
2026-02-17 03:38:14.529281 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:38:14.529299 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:38:14.529317 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:38:14.529334 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:38:14.529345 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:38:14.529356 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:38:14.529366 | orchestrator |
2026-02-17 03:38:14.529413 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-17 03:38:14.529431 | orchestrator | Tuesday 17 February 2026 03:38:06 +0000 (0:00:00.667) 0:00:06.791 ******
2026-02-17 03:38:14.529451 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:38:14.529469 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:38:14.529487 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:38:14.529505 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:38:14.529517 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:38:14.529527 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:38:14.529538 | orchestrator |
2026-02-17 03:38:14.529549 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-17 03:38:14.529560 | orchestrator | Tuesday 17 February 2026 03:38:07 +0000 (0:00:00.874) 0:00:07.666 ******
2026-02-17 03:38:14.529571 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:14.529583 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:14.529594 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:14.529605 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:14.529616 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:14.529626 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:14.529637 | orchestrator |
2026-02-17 03:38:14.529648 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-17 03:38:14.529660 | orchestrator | Tuesday 17 February 2026 03:38:08 +0000 (0:00:00.666) 0:00:08.333 ******
2026-02-17 03:38:14.529670 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:38:14.529681 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:38:14.529692 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:38:14.529703 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:38:14.529730 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:38:14.529742 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:38:14.529752 | orchestrator |
2026-02-17 03:38:14.529763 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-17 03:38:14.529774 | orchestrator | Tuesday 17 February 2026 03:38:09 +0000 (0:00:00.824) 0:00:09.158 ******
2026-02-17 03:38:14.529785 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 03:38:14.529796 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 03:38:14.529807 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 03:38:14.529817 | orchestrator |
2026-02-17 03:38:14.529828 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-17 03:38:14.529839 | orchestrator | Tuesday 17 February 2026 03:38:09 +0000 (0:00:00.663) 0:00:09.821 ******
2026-02-17 03:38:14.529864 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:38:14.529875 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:38:14.529885 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:38:14.529919 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:38:14.529930 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:38:14.529941 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:38:14.529952 | orchestrator |
2026-02-17 03:38:14.529962 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-17 03:38:14.529973 | orchestrator | Tuesday 17 February 2026 03:38:10 +0000 (0:00:00.843) 0:00:10.665 ******
2026-02-17 03:38:14.529984 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 03:38:14.529995 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 03:38:14.530006 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 03:38:14.530072 | orchestrator |
2026-02-17 03:38:14.530085 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-17 03:38:14.530097 | orchestrator | Tuesday 17 February 2026 03:38:13 +0000 (0:00:02.405) 0:00:13.071 ******
2026-02-17 03:38:14.530107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 03:38:14.530119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 03:38:14.530130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 03:38:14.530141 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:14.530152 | orchestrator |
2026-02-17 03:38:14.530163 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-17 03:38:14.530174 | orchestrator | Tuesday 17 February 2026 03:38:13 +0000 (0:00:00.442) 0:00:13.514 ******
2026-02-17 03:38:14.530231 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-17 03:38:14.530246 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-17 03:38:14.530258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-17 03:38:14.530269 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:14.530280 | orchestrator |
2026-02-17 03:38:14.530291 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-17 03:38:14.530302 | orchestrator | Tuesday 17 February 2026 03:38:14 +0000 (0:00:00.644) 0:00:14.158 ******
2026-02-17 03:38:14.530314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:14.530335 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:14.530354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:14.530400 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:14.530413 | orchestrator |
2026-02-17 03:38:14.530431 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-17 03:38:14.530442 | orchestrator | Tuesday 17 February 2026 03:38:14 +0000 (0:00:00.183) 0:00:14.342 ******
2026-02-17 03:38:14.530482 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 03:38:11.519905', 'end': '2026-02-17 03:38:11.563929', 'delta': '0:00:00.044024', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-17 03:38:24.730063 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 03:38:12.109749', 'end': '2026-02-17 03:38:12.158844', 'delta': '0:00:00.049095', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-17 03:38:24.730164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 03:38:12.638120', 'end': '2026-02-17 03:38:12.678106', 'delta': '0:00:00.039986', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-17 03:38:24.730176 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730187 | orchestrator |
2026-02-17 03:38:24.730195 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-17 03:38:24.730204 | orchestrator | Tuesday 17 February 2026 03:38:14 +0000 (0:00:00.211) 0:00:14.553 ******
2026-02-17 03:38:24.730212 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:38:24.730220 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:38:24.730227 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:38:24.730235 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:38:24.730242 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:38:24.730249 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:38:24.730256 | orchestrator |
2026-02-17 03:38:24.730264 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-17 03:38:24.730271 | orchestrator | Tuesday 17 February 2026 03:38:15 +0000 (0:00:00.777) 0:00:15.330 ******
2026-02-17 03:38:24.730279 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-17 03:38:24.730286 | orchestrator |
2026-02-17 03:38:24.730294 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-17 03:38:24.730301 | orchestrator | Tuesday 17 February 2026 03:38:16 +0000 (0:00:01.111) 0:00:16.442 ******
2026-02-17 03:38:24.730329 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730337 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.730345 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.730352 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.730359 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.730366 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.730373 | orchestrator |
2026-02-17 03:38:24.730421 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-17 03:38:24.730429 | orchestrator | Tuesday 17 February 2026 03:38:17 +0000 (0:00:00.635) 0:00:17.077 ******
2026-02-17 03:38:24.730437 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730444 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.730451 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.730459 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.730466 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.730473 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.730480 | orchestrator |
2026-02-17 03:38:24.730488 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-17 03:38:24.730495 | orchestrator | Tuesday 17 February 2026 03:38:18 +0000 (0:00:01.222) 0:00:18.299 ******
2026-02-17 03:38:24.730502 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730509 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.730517 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.730524 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.730531 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.730550 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.730559 | orchestrator |
2026-02-17 03:38:24.730567 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-17 03:38:24.730576 | orchestrator | Tuesday 17 February 2026 03:38:18 +0000 (0:00:00.646) 0:00:18.945 ******
2026-02-17 03:38:24.730584 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730592 | orchestrator |
2026-02-17 03:38:24.730601 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-17 03:38:24.730609 | orchestrator | Tuesday 17 February 2026 03:38:19 +0000 (0:00:00.119) 0:00:19.065 ******
2026-02-17 03:38:24.730617 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730626 | orchestrator |
2026-02-17 03:38:24.730634 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-17 03:38:24.730642 | orchestrator | Tuesday 17 February 2026 03:38:19 +0000 (0:00:00.215) 0:00:19.280 ******
2026-02-17 03:38:24.730650 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730658 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.730666 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.730674 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.730683 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.730691 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.730700 | orchestrator |
2026-02-17 03:38:24.730723 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-17 03:38:24.730732 | orchestrator | Tuesday 17 February 2026 03:38:20 +0000 (0:00:00.817) 0:00:20.098 ******
2026-02-17 03:38:24.730741 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730749 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.730757 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.730765 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.730773 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.730782 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.730790 | orchestrator |
2026-02-17 03:38:24.730798 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-17 03:38:24.730806 | orchestrator | Tuesday 17 February 2026 03:38:20 +0000 (0:00:00.628) 0:00:20.726 ******
2026-02-17 03:38:24.730814 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730823 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.730831 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.730845 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.730854 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.730862 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.730870 | orchestrator |
2026-02-17 03:38:24.730878 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-17 03:38:24.730886 | orchestrator | Tuesday 17 February 2026 03:38:21 +0000 (0:00:00.878) 0:00:21.605 ******
2026-02-17 03:38:24.730895 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730903 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.730912 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.730920 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.730928 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.730935 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.730942 | orchestrator |
2026-02-17 03:38:24.730950 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-17 03:38:24.730957 | orchestrator | Tuesday 17 February 2026 03:38:22 +0000 (0:00:00.642) 0:00:22.247 ******
2026-02-17 03:38:24.730964 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.730972 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.730979 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.730986 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.730993 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.731001 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.731008 | orchestrator |
2026-02-17 03:38:24.731015 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-17 03:38:24.731023 | orchestrator | Tuesday 17 February 2026 03:38:23 +0000 (0:00:00.841) 0:00:23.089 ******
2026-02-17 03:38:24.731030 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.731037 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.731044 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.731052 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.731059 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.731066 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.731073 | orchestrator |
2026-02-17 03:38:24.731081 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-17 03:38:24.731089 | orchestrator | Tuesday 17 February 2026 03:38:23 +0000 (0:00:00.611) 0:00:23.701 ******
2026-02-17 03:38:24.731097 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:24.731104 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:24.731111 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:38:24.731119 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:24.731126 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:24.731133 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:24.731140 | orchestrator |
2026-02-17 03:38:24.731148 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-17 03:38:24.731155 | orchestrator | Tuesday 17 February 2026 03:38:24 +0000 (0:00:00.906) 0:00:24.607 ******
2026-02-17 03:38:24.731163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.731178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.731197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.846723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.846869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.846885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.846897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.846908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.846960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.846973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.847027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:24.847065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:24.847077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:24.847089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:24.847114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:24.847144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:25.059327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059599 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:25.059612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.059651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid':
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.059667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.059694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.059715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.192789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.192888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.192905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.192917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.192953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.192985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.192996 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:25.193008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.193019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.193048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.193059 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.193069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.193088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.193108 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.193126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.424005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.424135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.424156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.424198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.424226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.424239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.424252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-17 03:38:25.424264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.424298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.424316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.424354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.424426 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:38:25.424447 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:25.424468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.424566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.664708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-17 03:38:25.664801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.664835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.664846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.664855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.664878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:38:25.664933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:25.664954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:25.664966 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:38:25.664976 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:38:25.664986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.664995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.665009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.665018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.665027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.665036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:25.665053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:26.162009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-17 03:38:26.162182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:26.162197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-17 03:38:26.162205 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:38:26.162213 | orchestrator |
2026-02-17 03:38:26.162221 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-17 03:38:26.162228 | orchestrator | Tuesday 17 February 2026 03:38:25 +0000 (0:00:01.082) 0:00:25.689 ******
2026-02-17 03:38:26.162250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.162263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.162270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.162280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.162296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.162307 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.162319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.162351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.229932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.230109 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.230161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.230216 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.230270 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.230301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.230322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.230341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.230415 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244410 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.244704 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.530952 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:38:26.531045 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531076 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531087 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531116 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531143 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531153 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531177 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531187 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531202 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531211 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:38:26.531221 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.531237 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.636282 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.636483 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-17 03:38:26.636518 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.636590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.636605 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.636617 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.636628 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.636651 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.636662 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.636722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805182 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805195 | orchestrator | 
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805239 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805260 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805281 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:26.805295 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805307 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805319 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:26.805338 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077180 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077295 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077327 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077342 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077432 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-17 03:38:27.077472 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077490 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:38:27.077507 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:38:27.077522 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077538 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077553 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077568 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:27.077594 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:34.148881 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:34.148988 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:34.149008 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:34.149048 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16', 
'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:34.149150 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-17 03:38:34.149166 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:38:34.149176 | orchestrator | 2026-02-17 03:38:34.149185 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 03:38:34.149194 | orchestrator | Tuesday 17 February 2026 03:38:27 +0000 (0:00:01.419) 0:00:27.109 ****** 2026-02-17 03:38:34.149202 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:38:34.149211 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:38:34.149219 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:38:34.149227 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:38:34.149234 | 
orchestrator | ok: [testbed-node-1] 2026-02-17 03:38:34.149242 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:38:34.149250 | orchestrator | 2026-02-17 03:38:34.149258 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 03:38:34.149266 | orchestrator | Tuesday 17 February 2026 03:38:28 +0000 (0:00:01.016) 0:00:28.126 ****** 2026-02-17 03:38:34.149274 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:38:34.149282 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:38:34.149292 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:38:34.149306 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:38:34.149319 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:38:34.149332 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:38:34.149345 | orchestrator | 2026-02-17 03:38:34.149359 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 03:38:34.149367 | orchestrator | Tuesday 17 February 2026 03:38:28 +0000 (0:00:00.875) 0:00:29.002 ****** 2026-02-17 03:38:34.149375 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:34.149409 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:34.149420 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:34.149429 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:38:34.149438 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:38:34.149447 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:38:34.149456 | orchestrator | 2026-02-17 03:38:34.149465 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 03:38:34.149475 | orchestrator | Tuesday 17 February 2026 03:38:29 +0000 (0:00:00.779) 0:00:29.781 ****** 2026-02-17 03:38:34.149484 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:34.149497 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:34.149511 | orchestrator | skipping: [testbed-node-5] 2026-02-17 
03:38:34.149525 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:38:34.149539 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:38:34.149552 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:38:34.149566 | orchestrator | 2026-02-17 03:38:34.149580 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 03:38:34.149594 | orchestrator | Tuesday 17 February 2026 03:38:30 +0000 (0:00:00.887) 0:00:30.669 ****** 2026-02-17 03:38:34.149605 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:34.149614 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:34.149623 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:34.149641 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:38:34.149649 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:38:34.149658 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:38:34.149667 | orchestrator | 2026-02-17 03:38:34.149676 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 03:38:34.149686 | orchestrator | Tuesday 17 February 2026 03:38:31 +0000 (0:00:00.668) 0:00:31.337 ****** 2026-02-17 03:38:34.149694 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:34.149704 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:34.149713 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:34.149723 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:38:34.149731 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:38:34.149740 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:38:34.149749 | orchestrator | 2026-02-17 03:38:34.149758 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 03:38:34.149767 | orchestrator | Tuesday 17 February 2026 03:38:32 +0000 (0:00:00.901) 0:00:32.238 ****** 2026-02-17 03:38:34.149775 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-0) 2026-02-17 03:38:34.149784 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-17 03:38:34.149792 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-17 03:38:34.149800 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-17 03:38:34.149808 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-17 03:38:34.149815 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 03:38:34.149823 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-17 03:38:34.149831 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-17 03:38:34.149839 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-17 03:38:34.149847 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-17 03:38:34.149855 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-17 03:38:34.149862 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-17 03:38:34.149871 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-17 03:38:34.149879 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-17 03:38:34.149896 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 03:38:48.529102 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-17 03:38:48.529215 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-17 03:38:48.529248 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-17 03:38:48.529261 | orchestrator | 2026-02-17 03:38:48.529273 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 03:38:48.529286 | orchestrator | Tuesday 17 February 2026 03:38:34 +0000 (0:00:01.929) 0:00:34.168 ****** 2026-02-17 03:38:48.529297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-17 03:38:48.529309 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-02-17 03:38:48.529320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-17 03:38:48.529331 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:48.529343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-17 03:38:48.529353 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-17 03:38:48.529364 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-17 03:38:48.529375 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:48.529385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-17 03:38:48.529467 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-17 03:38:48.529478 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-17 03:38:48.529489 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:48.529500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 03:38:48.529517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 03:38:48.529568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 03:38:48.529587 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:38:48.529605 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-17 03:38:48.529621 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-17 03:38:48.529638 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-17 03:38:48.529655 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:38:48.529672 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-17 03:38:48.529688 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-17 03:38:48.529706 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-17 03:38:48.529723 | orchestrator | 
skipping: [testbed-node-2] 2026-02-17 03:38:48.529742 | orchestrator | 2026-02-17 03:38:48.529759 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 03:38:48.529777 | orchestrator | Tuesday 17 February 2026 03:38:34 +0000 (0:00:00.765) 0:00:34.933 ****** 2026-02-17 03:38:48.529795 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:38:48.529813 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:38:48.529832 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:38:48.529852 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:38:48.529870 | orchestrator | 2026-02-17 03:38:48.529890 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 03:38:48.529903 | orchestrator | Tuesday 17 February 2026 03:38:35 +0000 (0:00:01.098) 0:00:36.032 ****** 2026-02-17 03:38:48.529914 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:48.529926 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:48.529937 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:48.529947 | orchestrator | 2026-02-17 03:38:48.529958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 03:38:48.529969 | orchestrator | Tuesday 17 February 2026 03:38:36 +0000 (0:00:00.342) 0:00:36.374 ****** 2026-02-17 03:38:48.529980 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:48.529991 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:48.530002 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:48.530012 | orchestrator | 2026-02-17 03:38:48.530083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 03:38:48.530095 | orchestrator | Tuesday 17 February 2026 03:38:36 +0000 
(0:00:00.347) 0:00:36.722 ****** 2026-02-17 03:38:48.530106 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:48.530117 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:48.530127 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:48.530138 | orchestrator | 2026-02-17 03:38:48.530149 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 03:38:48.530160 | orchestrator | Tuesday 17 February 2026 03:38:37 +0000 (0:00:00.603) 0:00:37.325 ****** 2026-02-17 03:38:48.530171 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:38:48.530182 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:38:48.530192 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:38:48.530203 | orchestrator | 2026-02-17 03:38:48.530214 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 03:38:48.530225 | orchestrator | Tuesday 17 February 2026 03:38:37 +0000 (0:00:00.460) 0:00:37.785 ****** 2026-02-17 03:38:48.530236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:38:48.530247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:38:48.530258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:38:48.530268 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:48.530279 | orchestrator | 2026-02-17 03:38:48.530290 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 03:38:48.530313 | orchestrator | Tuesday 17 February 2026 03:38:38 +0000 (0:00:00.409) 0:00:38.195 ****** 2026-02-17 03:38:48.530324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:38:48.530335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:38:48.530346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:38:48.530357 | orchestrator | 
skipping: [testbed-node-3] 2026-02-17 03:38:48.530367 | orchestrator | 2026-02-17 03:38:48.530426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 03:38:48.530448 | orchestrator | Tuesday 17 February 2026 03:38:38 +0000 (0:00:00.393) 0:00:38.588 ****** 2026-02-17 03:38:48.530474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:38:48.530490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:38:48.530501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:38:48.530512 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:48.530523 | orchestrator | 2026-02-17 03:38:48.530533 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 03:38:48.530544 | orchestrator | Tuesday 17 February 2026 03:38:38 +0000 (0:00:00.392) 0:00:38.980 ****** 2026-02-17 03:38:48.530555 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:38:48.530566 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:38:48.530577 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:38:48.530588 | orchestrator | 2026-02-17 03:38:48.530598 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 03:38:48.530609 | orchestrator | Tuesday 17 February 2026 03:38:39 +0000 (0:00:00.369) 0:00:39.350 ****** 2026-02-17 03:38:48.530620 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-17 03:38:48.530631 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-17 03:38:48.530642 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-17 03:38:48.530653 | orchestrator | 2026-02-17 03:38:48.530664 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 03:38:48.530675 | orchestrator | Tuesday 17 February 2026 03:38:40 +0000 (0:00:01.114) 0:00:40.464 ****** 2026-02-17 03:38:48.530686 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 03:38:48.530698 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 03:38:48.530709 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 03:38:48.530720 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-17 03:38:48.530731 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 03:38:48.530741 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 03:38:48.530752 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 03:38:48.530763 | orchestrator | 2026-02-17 03:38:48.530774 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 03:38:48.530785 | orchestrator | Tuesday 17 February 2026 03:38:41 +0000 (0:00:00.826) 0:00:41.290 ****** 2026-02-17 03:38:48.530795 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 03:38:48.530806 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 03:38:48.530818 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 03:38:48.530837 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-17 03:38:48.530856 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 03:38:48.530874 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 03:38:48.530891 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 03:38:48.530908 | orchestrator | 2026-02-17 03:38:48.530928 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 03:38:48.530954 | orchestrator | Tuesday 17 February 2026 03:38:43 +0000 (0:00:01.991) 0:00:43.282 ****** 2026-02-17 03:38:48.530975 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:38:48.530995 | orchestrator | 2026-02-17 03:38:48.531014 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 03:38:48.531032 | orchestrator | Tuesday 17 February 2026 03:38:44 +0000 (0:00:01.317) 0:00:44.599 ****** 2026-02-17 03:38:48.531046 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:38:48.531057 | orchestrator | 2026-02-17 03:38:48.531068 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 03:38:48.531078 | orchestrator | Tuesday 17 February 2026 03:38:45 +0000 (0:00:01.270) 0:00:45.869 ****** 2026-02-17 03:38:48.531089 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:38:48.531100 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:38:48.531111 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:38:48.531121 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:38:48.531132 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:38:48.531143 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:38:48.531153 | orchestrator | 2026-02-17 03:38:48.531164 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 03:38:48.531175 | orchestrator | Tuesday 17 February 2026 03:38:47 +0000 (0:00:01.207) 0:00:47.077 ****** 2026-02-17 03:38:48.531185 | orchestrator | skipping: [testbed-node-0] 2026-02-17 
03:38:48.531196 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:38:48.531207 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:38:48.531217 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:38:48.531228 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:38:48.531239 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:38:48.531249 | orchestrator | 2026-02-17 03:38:48.531260 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 03:38:48.531271 | orchestrator | Tuesday 17 February 2026 03:38:47 +0000 (0:00:00.719) 0:00:47.797 ****** 2026-02-17 03:38:48.531282 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:38:48.531292 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:38:48.531312 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.499895 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.499979 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.499998 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:39:11.500004 | orchestrator | 2026-02-17 03:39:11.500009 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 03:39:11.500023 | orchestrator | Tuesday 17 February 2026 03:38:48 +0000 (0:00:00.964) 0:00:48.762 ****** 2026-02-17 03:39:11.500028 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.500033 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:39:11.500037 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500042 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:39:11.500046 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.500050 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:39:11.500055 | orchestrator | 2026-02-17 03:39:11.500059 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 03:39:11.500063 | orchestrator | Tuesday 17 February 2026 03:38:49 +0000 (0:00:00.717) 0:00:49.480 ****** 
2026-02-17 03:39:11.500068 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:39:11.500075 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:39:11.500084 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:39:11.500095 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:39:11.500101 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:39:11.500108 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:39:11.500115 | orchestrator | 2026-02-17 03:39:11.500122 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 03:39:11.500150 | orchestrator | Tuesday 17 February 2026 03:38:50 +0000 (0:00:01.232) 0:00:50.712 ****** 2026-02-17 03:39:11.500157 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:39:11.500164 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:39:11.500171 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:39:11.500178 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.500185 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500192 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.500199 | orchestrator | 2026-02-17 03:39:11.500206 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 03:39:11.500213 | orchestrator | Tuesday 17 February 2026 03:38:51 +0000 (0:00:00.670) 0:00:51.383 ****** 2026-02-17 03:39:11.500220 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:39:11.500224 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:39:11.500228 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:39:11.500233 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.500237 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500241 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.500245 | orchestrator | 2026-02-17 03:39:11.500250 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] 
************************* 2026-02-17 03:39:11.500254 | orchestrator | Tuesday 17 February 2026 03:38:52 +0000 (0:00:00.958) 0:00:52.341 ****** 2026-02-17 03:39:11.500258 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:39:11.500263 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:39:11.500269 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:39:11.500276 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:39:11.500288 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:39:11.500295 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:39:11.500301 | orchestrator | 2026-02-17 03:39:11.500308 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 03:39:11.500314 | orchestrator | Tuesday 17 February 2026 03:38:53 +0000 (0:00:01.092) 0:00:53.434 ****** 2026-02-17 03:39:11.500322 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:39:11.500328 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:39:11.500334 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:39:11.500340 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:39:11.500347 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:39:11.500353 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:39:11.500360 | orchestrator | 2026-02-17 03:39:11.500367 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 03:39:11.500373 | orchestrator | Tuesday 17 February 2026 03:38:54 +0000 (0:00:01.397) 0:00:54.832 ****** 2026-02-17 03:39:11.500379 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:39:11.500386 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:39:11.500392 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:39:11.500439 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.500456 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500464 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.500472 | orchestrator | 2026-02-17 03:39:11.500478 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 03:39:11.500483 | orchestrator | Tuesday 17 February 2026 03:38:55 +0000 (0:00:00.630) 0:00:55.462 ****** 2026-02-17 03:39:11.500488 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:39:11.500493 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:39:11.500498 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:39:11.500503 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:39:11.500508 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:39:11.500513 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:39:11.500517 | orchestrator | 2026-02-17 03:39:11.500522 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 03:39:11.500527 | orchestrator | Tuesday 17 February 2026 03:38:56 +0000 (0:00:00.905) 0:00:56.367 ****** 2026-02-17 03:39:11.500532 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:39:11.500545 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:39:11.500551 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:39:11.500555 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.500560 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500565 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.500570 | orchestrator | 2026-02-17 03:39:11.500575 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 03:39:11.500580 | orchestrator | Tuesday 17 February 2026 03:38:57 +0000 (0:00:00.674) 0:00:57.041 ****** 2026-02-17 03:39:11.500584 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:39:11.500589 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:39:11.500594 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:39:11.500599 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.500604 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500608 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 03:39:11.500613 | orchestrator | 2026-02-17 03:39:11.500618 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 03:39:11.500623 | orchestrator | Tuesday 17 February 2026 03:38:57 +0000 (0:00:00.918) 0:00:57.960 ****** 2026-02-17 03:39:11.500627 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:39:11.500632 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:39:11.500652 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:39:11.500657 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.500662 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500674 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.500681 | orchestrator | 2026-02-17 03:39:11.500688 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 03:39:11.500694 | orchestrator | Tuesday 17 February 2026 03:38:58 +0000 (0:00:00.638) 0:00:58.598 ****** 2026-02-17 03:39:11.500701 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:39:11.500708 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:39:11.500715 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:39:11.500721 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:39:11.500728 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500736 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.500740 | orchestrator | 2026-02-17 03:39:11.500745 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 03:39:11.500749 | orchestrator | Tuesday 17 February 2026 03:38:59 +0000 (0:00:00.881) 0:00:59.480 ****** 2026-02-17 03:39:11.500753 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:39:11.500757 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:39:11.500762 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:39:11.500766 | orchestrator | skipping: [testbed-node-0] 
2026-02-17 03:39:11.500770 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:39:11.500774 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:39:11.500779 | orchestrator | 2026-02-17 03:39:11.500783 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 03:39:11.500787 | orchestrator | Tuesday 17 February 2026 03:39:00 +0000 (0:00:00.877) 0:01:00.357 ****** 2026-02-17 03:39:11.500792 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:39:11.500796 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:39:11.500800 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:39:11.500804 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:39:11.500809 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:39:11.500813 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:39:11.500817 | orchestrator | 2026-02-17 03:39:11.500822 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 03:39:11.500826 | orchestrator | Tuesday 17 February 2026 03:39:00 +0000 (0:00:00.678) 0:01:01.035 ****** 2026-02-17 03:39:11.500830 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:39:11.500834 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:39:11.500839 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:39:11.500843 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:39:11.500847 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:39:11.500852 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:39:11.500861 | orchestrator | 2026-02-17 03:39:11.500865 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 03:39:11.500869 | orchestrator | Tuesday 17 February 2026 03:39:01 +0000 (0:00:00.922) 0:01:01.958 ****** 2026-02-17 03:39:11.500874 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:39:11.500878 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:39:11.500882 | orchestrator | ok: [testbed-node-5] 
2026-02-17 03:39:11.500886 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:39:11.500891 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:39:11.500895 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:39:11.500899 | orchestrator |
2026-02-17 03:39:11.500903 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 03:39:11.500908 | orchestrator | Tuesday 17 February 2026 03:39:03 +0000 (0:00:01.358) 0:01:03.316 ******
2026-02-17 03:39:11.500912 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:39:11.500917 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:39:11.500921 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:39:11.500925 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:39:11.500929 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:39:11.500933 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:39:11.500938 | orchestrator |
2026-02-17 03:39:11.500942 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 03:39:11.500946 | orchestrator | Tuesday 17 February 2026 03:39:04 +0000 (0:00:01.496) 0:01:04.813 ******
2026-02-17 03:39:11.500951 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:39:11.500955 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:39:11.500959 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:39:11.500963 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:39:11.500967 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:39:11.500972 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:39:11.500976 | orchestrator |
2026-02-17 03:39:11.500980 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 03:39:11.500984 | orchestrator | Tuesday 17 February 2026 03:39:07 +0000 (0:00:02.430) 0:01:07.243 ******
2026-02-17 03:39:11.500990 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:39:11.500996 | orchestrator |
2026-02-17 03:39:11.501000 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 03:39:11.501005 | orchestrator | Tuesday 17 February 2026 03:39:08 +0000 (0:00:01.312) 0:01:08.556 ******
2026-02-17 03:39:11.501009 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:39:11.501013 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:39:11.501017 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:39:11.501022 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:39:11.501026 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:39:11.501030 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:39:11.501035 | orchestrator |
2026-02-17 03:39:11.501039 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 03:39:11.501043 | orchestrator | Tuesday 17 February 2026 03:39:09 +0000 (0:00:00.705) 0:01:09.261 ******
2026-02-17 03:39:11.501048 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:39:11.501052 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:39:11.501056 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:39:11.501060 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:39:11.501064 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:39:11.501068 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:39:11.501073 | orchestrator |
2026-02-17 03:39:11.501077 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 03:39:11.501081 | orchestrator | Tuesday 17 February 2026 03:39:10 +0000 (0:00:00.870) 0:01:10.132 ******
2026-02-17 03:39:11.501089 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 03:40:23.075020 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 03:40:23.075226 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 03:40:23.075256 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 03:40:23.075276 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 03:40:23.075295 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 03:40:23.075312 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 03:40:23.075332 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 03:40:23.075350 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 03:40:23.075369 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 03:40:23.075389 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 03:40:23.075407 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 03:40:23.075426 | orchestrator |
2026-02-17 03:40:23.075448 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 03:40:23.075467 | orchestrator | Tuesday 17 February 2026 03:39:11 +0000 (0:00:01.395) 0:01:11.528 ******
2026-02-17 03:40:23.075485 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:40:23.075566 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:40:23.075587 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:40:23.075605 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:40:23.075623 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:40:23.075642 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:40:23.075661 | orchestrator |
2026-02-17 03:40:23.075679 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 03:40:23.075697 | orchestrator | Tuesday 17 February 2026 03:39:12 +0000 (0:00:01.298) 0:01:12.826 ******
2026-02-17 03:40:23.075716 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.075735 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.075753 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:23.075774 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:23.075792 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:23.075812 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:23.075832 | orchestrator |
2026-02-17 03:40:23.075851 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 03:40:23.075870 | orchestrator | Tuesday 17 February 2026 03:39:13 +0000 (0:00:00.660) 0:01:13.486 ******
2026-02-17 03:40:23.075898 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.075916 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.075934 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:23.075951 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:23.075969 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:23.075987 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:23.076006 | orchestrator |
2026-02-17 03:40:23.076026 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 03:40:23.076045 | orchestrator | Tuesday 17 February 2026 03:39:14 +0000 (0:00:01.004) 0:01:14.491 ******
2026-02-17 03:40:23.076064 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.076082 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.076099 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:23.076116 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:23.076133 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:23.076152 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:23.076171 | orchestrator |
2026-02-17 03:40:23.076189 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 03:40:23.076209 | orchestrator | Tuesday 17 February 2026 03:39:15 +0000 (0:00:00.689) 0:01:15.180 ******
2026-02-17 03:40:23.076250 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:40:23.076271 | orchestrator |
2026-02-17 03:40:23.076290 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 03:40:23.076310 | orchestrator | Tuesday 17 February 2026 03:39:16 +0000 (0:00:01.411) 0:01:16.592 ******
2026-02-17 03:40:23.076329 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:40:23.076349 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:23.076366 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:40:23.076377 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:40:23.076388 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:23.076398 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:23.076409 | orchestrator |
2026-02-17 03:40:23.076420 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 03:40:23.076463 | orchestrator | Tuesday 17 February 2026 03:40:13 +0000 (0:00:56.480) 0:02:13.073 ******
2026-02-17 03:40:23.076475 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 03:40:23.076523 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 03:40:23.076543 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 03:40:23.076555 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.076566 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 03:40:23.076577 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 03:40:23.076589 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 03:40:23.076600 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.076611 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 03:40:23.076653 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 03:40:23.076677 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 03:40:23.076689 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:23.076700 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 03:40:23.076711 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 03:40:23.076722 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 03:40:23.076733 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:23.076744 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 03:40:23.076754 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 03:40:23.076765 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 03:40:23.076776 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:23.076787 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 03:40:23.076798 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 03:40:23.076809 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 03:40:23.076820 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:23.076831 | orchestrator |
2026-02-17 03:40:23.076843 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 03:40:23.076862 | orchestrator | Tuesday 17 February 2026 03:40:13 +0000 (0:00:00.749) 0:02:13.823 ******
2026-02-17 03:40:23.076879 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.076898 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.076917 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:23.076932 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:23.076944 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:23.076966 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:23.076977 | orchestrator |
2026-02-17 03:40:23.076988 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 03:40:23.076999 | orchestrator | Tuesday 17 February 2026 03:40:14 +0000 (0:00:00.180) 0:02:14.717 ******
2026-02-17 03:40:23.077010 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.077021 | orchestrator |
2026-02-17 03:40:23.077032 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 03:40:23.077043 | orchestrator | Tuesday 17 February 2026 03:40:14 +0000 (0:00:00.665) 0:02:14.898 ******
2026-02-17 03:40:23.077054 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.077065 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.077076 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:23.077086 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:23.077097 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:23.077108 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:23.077119 | orchestrator |
2026-02-17 03:40:23.077130 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 03:40:23.077141 | orchestrator | Tuesday 17 February 2026 03:40:15 +0000 (0:00:00.894) 0:02:15.563 ******
2026-02-17 03:40:23.077152 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.077163 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.077173 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:23.077184 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:23.077195 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:23.077206 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:23.077216 | orchestrator |
2026-02-17 03:40:23.077227 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-17 03:40:23.077238 | orchestrator | Tuesday 17 February 2026 03:40:16 +0000 (0:00:00.894) 0:02:16.457 ******
2026-02-17 03:40:23.077249 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.077260 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.077271 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:23.077281 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:23.077292 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:23.077303 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:23.077314 | orchestrator |
2026-02-17 03:40:23.077325 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 03:40:23.077336 | orchestrator | Tuesday 17 February 2026 03:40:17 +0000 (0:00:00.711) 0:02:17.169 ******
2026-02-17 03:40:23.077347 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:23.077358 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:23.077369 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:23.077380 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:40:23.077390 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:40:23.077401 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:40:23.077412 | orchestrator |
2026-02-17 03:40:23.077423 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 03:40:23.077434 | orchestrator | Tuesday 17 February 2026 03:40:20 +0000 (0:00:03.286) 0:02:20.456 ******
2026-02-17 03:40:23.077445 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:23.077456 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:23.077466 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:23.077477 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:40:23.077488 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:40:23.077521 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:40:23.077532 | orchestrator |
2026-02-17 03:40:23.077543 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 03:40:23.077554 | orchestrator | Tuesday 17 February 2026 03:40:21 +0000 (0:00:00.650) 0:02:21.107 ******
2026-02-17 03:40:23.077566 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:40:23.077580 | orchestrator |
2026-02-17 03:40:23.077591 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 03:40:23.077608 | orchestrator | Tuesday 17 February 2026 03:40:22 +0000 (0:00:01.389) 0:02:22.496 ******
2026-02-17 03:40:23.077619 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:23.077630 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:23.077650 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:38.093763 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:38.093869 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:38.093876 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:38.093880 | orchestrator |
2026-02-17 03:40:38.093885 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 03:40:38.093891 | orchestrator | Tuesday 17 February 2026 03:40:23 +0000 (0:00:00.869) 0:02:23.365 ******
2026-02-17 03:40:38.093895 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:38.093899 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:38.093903 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:38.093907 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:38.093911 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:38.093914 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:38.093918 | orchestrator |
2026-02-17 03:40:38.093922 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 03:40:38.093926 | orchestrator | Tuesday 17 February 2026 03:40:23 +0000 (0:00:00.665) 0:02:24.031 ******
2026-02-17 03:40:38.093930 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:38.093934 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:38.093938 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:38.093942 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:38.093945 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:38.093949 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:38.093953 | orchestrator |
2026-02-17 03:40:38.093957 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 03:40:38.093961 | orchestrator | Tuesday 17 February 2026 03:40:24 +0000 (0:00:00.938) 0:02:24.970 ******
2026-02-17 03:40:38.093965 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:38.093969 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:38.093973 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:38.093977 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:38.093981 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:38.093984 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:38.093988 | orchestrator |
2026-02-17 03:40:38.093992 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 03:40:38.093996 | orchestrator | Tuesday 17 February 2026 03:40:25 +0000 (0:00:00.707) 0:02:25.677 ******
2026-02-17 03:40:38.094000 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:38.094004 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:38.094007 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:38.094045 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:38.094049 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:38.094054 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:38.094058 | orchestrator |
2026-02-17 03:40:38.094061 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 03:40:38.094065 | orchestrator | Tuesday 17 February 2026 03:40:26 +0000 (0:00:00.938) 0:02:26.616 ******
2026-02-17 03:40:38.094069 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:38.094073 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:38.094077 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:38.094081 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:38.094084 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:38.094088 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:38.094092 | orchestrator |
2026-02-17 03:40:38.094096 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 03:40:38.094100 | orchestrator | Tuesday 17 February 2026 03:40:27 +0000 (0:00:00.686) 0:02:27.303 ******
2026-02-17 03:40:38.094120 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:38.094124 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:38.094129 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:38.094132 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:38.094136 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:38.094140 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:38.094144 | orchestrator |
2026-02-17 03:40:38.094148 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 03:40:38.094151 | orchestrator | Tuesday 17 February 2026 03:40:28 +0000 (0:00:00.911) 0:02:28.215 ******
2026-02-17 03:40:38.094155 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:38.094159 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:38.094163 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:38.094167 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:38.094170 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:38.094174 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:38.094178 | orchestrator |
2026-02-17 03:40:38.094182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 03:40:38.094186 | orchestrator | Tuesday 17 February 2026 03:40:29 +0000 (0:00:00.923) 0:02:29.138 ******
2026-02-17 03:40:38.094190 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:38.094194 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:38.094198 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:38.094202 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:40:38.094206 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:40:38.094210 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:40:38.094214 | orchestrator |
2026-02-17 03:40:38.094217 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 03:40:38.094221 | orchestrator | Tuesday 17 February 2026 03:40:30 +0000 (0:00:01.317) 0:02:30.456 ******
2026-02-17 03:40:38.094226 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:40:38.094233 | orchestrator |
2026-02-17 03:40:38.094236 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 03:40:38.094240 | orchestrator | Tuesday 17 February 2026 03:40:31 +0000 (0:00:01.333) 0:02:31.790 ******
2026-02-17 03:40:38.094244 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-02-17 03:40:38.094249 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-02-17 03:40:38.094253 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-02-17 03:40:38.094256 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-02-17 03:40:38.094261 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-17 03:40:38.094266 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-17 03:40:38.094289 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-02-17 03:40:38.094297 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-02-17 03:40:38.094301 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-17 03:40:38.094306 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-17 03:40:38.094310 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-17 03:40:38.094315 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-17 03:40:38.094319 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-17 03:40:38.094323 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-17 03:40:38.094328 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-17 03:40:38.094332 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-17 03:40:38.094336 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-17 03:40:38.094341 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-17 03:40:38.094345 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-17 03:40:38.094354 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-17 03:40:38.094359 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-17 03:40:38.094363 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-17 03:40:38.094368 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-17 03:40:38.094372 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-17 03:40:38.094376 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-17 03:40:38.094381 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-17 03:40:38.094385 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-17 03:40:38.094389 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-17 03:40:38.094394 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-17 03:40:38.094398 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-17 03:40:38.094402 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-17 03:40:38.094407 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-17 03:40:38.094411 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-17 03:40:38.094415 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-17 03:40:38.094419 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-17 03:40:38.094424 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-17 03:40:38.094428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-17 03:40:38.094433 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-17 03:40:38.094437 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-17 03:40:38.094442 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-17 03:40:38.094447 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-17 03:40:38.094454 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-17 03:40:38.094460 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-17 03:40:38.094466 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-17 03:40:38.094472 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-17 03:40:38.094478 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-17 03:40:38.094485 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 03:40:38.094490 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 03:40:38.094496 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 03:40:38.094503 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-17 03:40:38.094509 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-17 03:40:38.094539 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 03:40:38.094545 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 03:40:38.094550 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 03:40:38.094555 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 03:40:38.094559 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 03:40:38.094563 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 03:40:38.094567 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 03:40:38.094572 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 03:40:38.094576 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 03:40:38.094581 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 03:40:38.094589 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 03:40:38.094594 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 03:40:38.094598 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 03:40:38.094602 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 03:40:38.094607 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 03:40:38.094615 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 03:40:51.950401 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 03:40:51.950612 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 03:40:51.950647 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 03:40:51.950668 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 03:40:51.950688 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 03:40:51.950708 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 03:40:51.950728 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 03:40:51.950748 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-17 03:40:51.950768 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 03:40:51.950788 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 03:40:51.950807 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 03:40:51.950826 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 03:40:51.950846 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 03:40:51.950865 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 03:40:51.950887 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-17 03:40:51.950908 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 03:40:51.950929 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-17 03:40:51.950952 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-17 03:40:51.950976 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 03:40:51.950998 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 03:40:51.951021 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-17 03:40:51.951045 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-17 03:40:51.951066 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-17 03:40:51.951089 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-17 03:40:51.951113 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-17 03:40:51.951134 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-17 03:40:51.951156 | 
orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-17 03:40:51.951176 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-17 03:40:51.951196 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-17 03:40:51.951234 | orchestrator | 2026-02-17 03:40:51.951255 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 03:40:51.951275 | orchestrator | Tuesday 17 February 2026 03:40:38 +0000 (0:00:06.294) 0:02:38.085 ****** 2026-02-17 03:40:51.951292 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:40:51.951310 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:40:51.951328 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:40:51.951347 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:40:51.951403 | orchestrator | 2026-02-17 03:40:51.951423 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-17 03:40:51.951440 | orchestrator | Tuesday 17 February 2026 03:40:39 +0000 (0:00:01.150) 0:02:39.235 ****** 2026-02-17 03:40:51.951487 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 03:40:51.951561 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 03:40:51.951604 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-17 03:40:51.951622 | orchestrator | 2026-02-17 03:40:51.951638 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-17 03:40:51.951654 | orchestrator | Tuesday 17 February 2026 03:40:39 +0000 (0:00:00.735) 
0:02:39.971 ******
2026-02-17 03:40:51.951673 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-17 03:40:51.951692 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 03:40:51.951710 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 03:40:51.951728 | orchestrator |
2026-02-17 03:40:51.951746 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 03:40:51.951762 | orchestrator | Tuesday 17 February 2026 03:40:41 +0000 (0:00:01.136) 0:02:41.108 ******
2026-02-17 03:40:51.951774 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:51.951785 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:51.951795 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:51.951806 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.951817 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.951828 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.951839 | orchestrator |
2026-02-17 03:40:51.951850 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 03:40:51.951896 | orchestrator | Tuesday 17 February 2026 03:40:41 +0000 (0:00:00.872) 0:02:41.980 ******
2026-02-17 03:40:51.951909 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:51.951919 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:51.951930 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:51.951941 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.951952 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.951962 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.951973 | orchestrator |
2026-02-17 03:40:51.951984 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 03:40:51.951995 | orchestrator | Tuesday 17 February 2026 03:40:42 +0000 (0:00:00.625) 0:02:42.606 ******
2026-02-17 03:40:51.952006 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:51.952017 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:51.952027 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:51.952039 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952050 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.952061 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.952071 | orchestrator |
2026-02-17 03:40:51.952082 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 03:40:51.952093 | orchestrator | Tuesday 17 February 2026 03:40:43 +0000 (0:00:00.915) 0:02:43.521 ******
2026-02-17 03:40:51.952104 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:51.952115 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:51.952126 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:51.952136 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952147 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.952172 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.952183 | orchestrator |
2026-02-17 03:40:51.952193 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 03:40:51.952204 | orchestrator | Tuesday 17 February 2026 03:40:44 +0000 (0:00:00.635) 0:02:44.157 ******
2026-02-17 03:40:51.952215 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:51.952226 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:51.952237 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:51.952248 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952259 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.952269 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.952280 | orchestrator |
2026-02-17 03:40:51.952291 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 03:40:51.952303 | orchestrator | Tuesday 17 February 2026 03:40:44 +0000 (0:00:00.882) 0:02:45.039 ******
2026-02-17 03:40:51.952314 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:51.952324 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:51.952335 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:51.952346 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952357 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.952368 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.952379 | orchestrator |
2026-02-17 03:40:51.952390 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 03:40:51.952401 | orchestrator | Tuesday 17 February 2026 03:40:45 +0000 (0:00:00.664) 0:02:45.704 ******
2026-02-17 03:40:51.952412 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:51.952423 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:51.952434 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:51.952445 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952456 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.952466 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.952477 | orchestrator |
2026-02-17 03:40:51.952488 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 03:40:51.952499 | orchestrator | Tuesday 17 February 2026 03:40:46 +0000 (0:00:00.912) 0:02:46.617 ******
2026-02-17 03:40:51.952510 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:51.952521 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:51.952531 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:40:51.952587 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952599 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.952610 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.952621 | orchestrator |
2026-02-17 03:40:51.952632 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 03:40:51.952643 | orchestrator | Tuesday 17 February 2026 03:40:47 +0000 (0:00:00.654) 0:02:47.271 ******
2026-02-17 03:40:51.952653 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952664 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.952675 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.952686 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:51.952696 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:51.952707 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:51.952718 | orchestrator |
2026-02-17 03:40:51.952729 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 03:40:51.952740 | orchestrator | Tuesday 17 February 2026 03:40:49 +0000 (0:00:02.665) 0:02:49.936 ******
2026-02-17 03:40:51.952750 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:51.952761 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:51.952774 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:51.952792 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952811 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.952826 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.952856 | orchestrator |
2026-02-17 03:40:51.952874 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 03:40:51.952906 | orchestrator | Tuesday 17 February 2026 03:40:50 +0000 (0:00:00.662) 0:02:50.599 ****** 2026-02-17
03:40:51.952924 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:40:51.952942 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:40:51.952960 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:40:51.952977 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:40:51.952995 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:40:51.953012 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:40:51.953031 | orchestrator |
2026-02-17 03:40:51.953049 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 03:40:51.953068 | orchestrator | Tuesday 17 February 2026 03:40:51 +0000 (0:00:00.981) 0:02:51.580 ******
2026-02-17 03:40:51.953088 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:40:51.953105 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:40:51.953146 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:06.576372 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.576469 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.576481 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.576490 | orchestrator |
2026-02-17 03:41:06.576515 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 03:41:06.576525 | orchestrator | Tuesday 17 February 2026 03:40:52 +0000 (0:00:00.892) 0:02:52.473 ******
2026-02-17 03:41:06.576535 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-17 03:41:06.576555 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 03:41:06.576587 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 03:41:06.576599 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.576607 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.576615 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.576623 | orchestrator |
2026-02-17 03:41:06.576632 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 03:41:06.576640 | orchestrator | Tuesday 17 February 2026 03:40:53 +0000 (0:00:00.662) 0:02:53.135 ******
2026-02-17 03:41:06.576650 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-17 03:41:06.576661 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-17 03:41:06.576671 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.576680 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-17 03:41:06.576688 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-17 03:41:06.576696 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:06.576704 | orchestrator | skipping:
[testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-17 03:41:06.576737 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-17 03:41:06.576746 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:06.576754 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.576762 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.576770 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.576777 | orchestrator |
2026-02-17 03:41:06.576786 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 03:41:06.576794 | orchestrator | Tuesday 17 February 2026 03:40:54 +0000 (0:00:00.946) 0:02:54.082 ******
2026-02-17 03:41:06.576802 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.576809 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:06.576817 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:06.576825 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.576833 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.576840 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.576848 | orchestrator |
2026-02-17 03:41:06.576856 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 03:41:06.576864 | orchestrator | Tuesday 17 February 2026 03:40:54 +0000 (0:00:00.878) 0:02:54.785 ******
2026-02-17 03:41:06.576872 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.576880 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:06.576887 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:06.576895 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.576903 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.576911 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.576918 | orchestrator |
2026-02-17 03:41:06.576928 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 03:41:06.576974 | orchestrator | Tuesday 17 February 2026 03:40:55 +0000 (0:00:00.878) 0:02:55.663 ******
2026-02-17 03:41:06.577010 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.577020 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:06.577030 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:06.577039 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.577049 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.577057 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.577066 | orchestrator |
2026-02-17 03:41:06.577076 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 03:41:06.577085 | orchestrator | Tuesday 17 February 2026 03:40:56 +0000 (0:00:00.695) 0:02:56.359 ******
2026-02-17 03:41:06.577095 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.577104 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:06.577112 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:06.577120 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.577127 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.577135 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.577143 | orchestrator |
2026-02-17 03:41:06.577151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6]
****
2026-02-17 03:41:06.577159 | orchestrator | Tuesday 17 February 2026 03:40:57 +0000 (0:00:00.874) 0:02:57.234 ******
2026-02-17 03:41:06.577167 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.577174 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:06.577182 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:06.577190 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.577197 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.577205 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.577222 | orchestrator |
2026-02-17 03:41:06.577230 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 03:41:06.577237 | orchestrator | Tuesday 17 February 2026 03:40:57 +0000 (0:00:00.723) 0:02:57.958 ******
2026-02-17 03:41:06.577246 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:41:06.577254 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:41:06.577262 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:41:06.577270 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.577278 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.577285 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.577293 | orchestrator |
2026-02-17 03:41:06.577301 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 03:41:06.577309 | orchestrator | Tuesday 17 February 2026 03:40:58 +0000 (0:00:00.900) 0:02:58.858 ******
2026-02-17 03:41:06.577317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 03:41:06.577325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 03:41:06.577333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 03:41:06.577341 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.577349 | orchestrator |
2026-02-17 03:41:06.577357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 03:41:06.577365 | orchestrator | Tuesday 17 February 2026 03:40:59 +0000 (0:00:00.447) 0:02:59.305 ******
2026-02-17 03:41:06.577373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 03:41:06.577381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 03:41:06.577389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 03:41:06.577397 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.577405 | orchestrator |
2026-02-17 03:41:06.577413 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 03:41:06.577421 | orchestrator | Tuesday 17 February 2026 03:40:59 +0000 (0:00:00.446) 0:02:59.752 ******
2026-02-17 03:41:06.577429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 03:41:06.577437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 03:41:06.577445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 03:41:06.577452 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:06.577460 | orchestrator |
2026-02-17 03:41:06.577468 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 03:41:06.577476 | orchestrator | Tuesday 17 February 2026 03:41:00 +0000 (0:00:00.456) 0:03:00.209 ******
2026-02-17 03:41:06.577484 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:41:06.577492 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:41:06.577500 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:41:06.577507 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.577515 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.577523 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.577531 | orchestrator |
2026-02-17 03:41:06.577539 | orchestrator | TASK [ceph-facts : Set_fact
rgw_instances] *************************************
2026-02-17 03:41:06.577547 | orchestrator | Tuesday 17 February 2026 03:41:00 +0000 (0:00:00.662) 0:03:00.872 ******
2026-02-17 03:41:06.577555 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-17 03:41:06.577588 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-17 03:41:06.577600 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-17 03:41:06.577613 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-17 03:41:06.577627 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:06.577643 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-17 03:41:06.577657 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:06.577668 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-17 03:41:06.577676 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:06.577684 | orchestrator |
2026-02-17 03:41:06.577692 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 03:41:06.577707 | orchestrator | Tuesday 17 February 2026 03:41:02 +0000 (0:00:01.888) 0:03:02.760 ******
2026-02-17 03:41:06.577715 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:41:06.577723 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:41:06.577731 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:41:06.577739 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:41:06.577747 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:41:06.577755 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:41:06.577763 | orchestrator |
2026-02-17 03:41:06.577771 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-17 03:41:06.577779 | orchestrator | Tuesday 17 February 2026 03:41:05 +0000 (0:00:02.823) 0:03:05.584 ******
2026-02-17 03:41:06.577787 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:41:06.577806 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:41:24.241632 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:41:24.241732 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:41:24.241744 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:41:24.241771 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:41:24.241778 | orchestrator |
2026-02-17 03:41:24.241786 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-17 03:41:24.241795 | orchestrator | Tuesday 17 February 2026 03:41:06 +0000 (0:00:01.020) 0:03:06.604 ******
2026-02-17 03:41:24.241801 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.241808 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:24.241815 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:24.241823 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:41:24.241829 | orchestrator |
2026-02-17 03:41:24.241836 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-17 03:41:24.241842 | orchestrator | Tuesday 17 February 2026 03:41:07 +0000 (0:00:01.214) 0:03:07.819 ******
2026-02-17 03:41:24.241849 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:41:24.241857 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:41:24.241865 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:41:24.241869 | orchestrator |
2026-02-17 03:41:24.241873 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-17 03:41:24.241877 | orchestrator | Tuesday 17 February 2026 03:41:08 +0000 (0:00:00.590) 0:03:08.410 ******
2026-02-17 03:41:24.241881 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:41:24.241885 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:41:24.241889 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:41:24.241893 | orchestrator |
2026-02-17 03:41:24.241897 | orchestrator |
RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-17 03:41:24.241901 | orchestrator | Tuesday 17 February 2026 03:41:09 +0000 (0:00:01.235) 0:03:09.645 ******
2026-02-17 03:41:24.241906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 03:41:24.241910 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 03:41:24.241914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 03:41:24.241918 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:24.241921 | orchestrator |
2026-02-17 03:41:24.241925 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-17 03:41:24.241929 | orchestrator | Tuesday 17 February 2026 03:41:10 +0000 (0:00:00.729) 0:03:10.374 ******
2026-02-17 03:41:24.241933 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:41:24.241937 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:41:24.241941 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:41:24.241945 | orchestrator |
2026-02-17 03:41:24.241948 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-17 03:41:24.241952 | orchestrator | Tuesday 17 February 2026 03:41:10 +0000 (0:00:00.404) 0:03:10.779 ******
2026-02-17 03:41:24.241956 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:24.241960 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:24.241964 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:24.241983 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 03:41:24.241987 | orchestrator |
2026-02-17 03:41:24.241991 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-17 03:41:24.241995 | orchestrator | Tuesday 17 February 2026 03:41:11 +0000 (0:00:01.147) 0:03:11.926 ******
2026-02-17 03:41:24.241998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 03:41:24.242002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 03:41:24.242006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 03:41:24.242010 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242052 | orchestrator |
2026-02-17 03:41:24.242058 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-17 03:41:24.242065 | orchestrator | Tuesday 17 February 2026 03:41:12 +0000 (0:00:00.428) 0:03:12.354 ******
2026-02-17 03:41:24.242072 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242079 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:24.242083 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:24.242087 | orchestrator |
2026-02-17 03:41:24.242090 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-17 03:41:24.242094 | orchestrator | Tuesday 17 February 2026 03:41:12 +0000 (0:00:00.357) 0:03:12.712 ******
2026-02-17 03:41:24.242098 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242102 | orchestrator |
2026-02-17 03:41:24.242106 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-17 03:41:24.242110 | orchestrator | Tuesday 17 February 2026 03:41:12 +0000 (0:00:00.255) 0:03:12.968 ******
2026-02-17 03:41:24.242113 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242117 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:24.242121 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:24.242125 | orchestrator |
2026-02-17 03:41:24.242129 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-17 03:41:24.242132 | orchestrator | Tuesday 17 February 2026 03:41:13 +0000 (0:00:00.594) 0:03:13.562 ******
2026-02-17 03:41:24.242154 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242158 | orchestrator |
2026-02-17 03:41:24.242163 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-17 03:41:24.242173 | orchestrator | Tuesday 17 February 2026 03:41:13 +0000 (0:00:00.256) 0:03:13.819 ******
2026-02-17 03:41:24.242178 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242182 | orchestrator |
2026-02-17 03:41:24.242187 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-17 03:41:24.242191 | orchestrator | Tuesday 17 February 2026 03:41:14 +0000 (0:00:00.287) 0:03:14.106 ******
2026-02-17 03:41:24.242195 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242200 | orchestrator |
2026-02-17 03:41:24.242204 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-17 03:41:24.242209 | orchestrator | Tuesday 17 February 2026 03:41:14 +0000 (0:00:00.142) 0:03:14.249 ******
2026-02-17 03:41:24.242222 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242227 | orchestrator |
2026-02-17 03:41:24.242245 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-17 03:41:24.242250 | orchestrator | Tuesday 17 February 2026 03:41:14 +0000 (0:00:00.269) 0:03:14.518 ******
2026-02-17 03:41:24.242254 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242258 | orchestrator |
2026-02-17 03:41:24.242263 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-17 03:41:24.242268 | orchestrator | Tuesday 17 February 2026 03:41:14 +0000 (0:00:00.264) 0:03:14.783 ******
2026-02-17 03:41:24.242272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 03:41:24.242277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 03:41:24.242281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 03:41:24.242291 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242296 | orchestrator |
2026-02-17 03:41:24.242300 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-17 03:41:24.242304 | orchestrator | Tuesday 17 February 2026 03:41:15 +0000 (0:00:00.424) 0:03:15.207 ******
2026-02-17 03:41:24.242309 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242313 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:41:24.242318 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:41:24.242322 | orchestrator |
2026-02-17 03:41:24.242326 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-17 03:41:24.242331 | orchestrator | Tuesday 17 February 2026 03:41:15 +0000 (0:00:00.363) 0:03:15.570 ******
2026-02-17 03:41:24.242335 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242339 | orchestrator |
2026-02-17 03:41:24.242344 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-17 03:41:24.242348 | orchestrator | Tuesday 17 February 2026 03:41:15 +0000 (0:00:00.259) 0:03:15.830 ******
2026-02-17 03:41:24.242352 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242357 | orchestrator |
2026-02-17 03:41:24.242361 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-17 03:41:24.242366 | orchestrator | Tuesday 17 February 2026 03:41:16 +0000 (0:00:00.781) 0:03:16.612 ******
2026-02-17 03:41:24.242370 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:24.242375 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:24.242379 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:24.242383 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3,
testbed-node-4, testbed-node-5
2026-02-17 03:41:24.242388 | orchestrator |
2026-02-17 03:41:24.242392 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-17 03:41:24.242397 | orchestrator | Tuesday 17 February 2026 03:41:17 +0000 (0:00:00.916) 0:03:17.529 ******
2026-02-17 03:41:24.242401 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:41:24.242405 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:41:24.242411 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:41:24.242418 | orchestrator |
2026-02-17 03:41:24.242424 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-17 03:41:24.242432 | orchestrator | Tuesday 17 February 2026 03:41:18 +0000 (0:00:00.576) 0:03:18.106 ******
2026-02-17 03:41:24.242441 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:41:24.242447 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:41:24.242453 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:41:24.242458 | orchestrator |
2026-02-17 03:41:24.242464 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-17 03:41:24.242469 | orchestrator | Tuesday 17 February 2026 03:41:19 +0000 (0:00:01.203) 0:03:19.309 ******
2026-02-17 03:41:24.242474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 03:41:24.242480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 03:41:24.242486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 03:41:24.242491 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:41:24.242497 | orchestrator |
2026-02-17 03:41:24.242503 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-17 03:41:24.242510 | orchestrator | Tuesday 17 February 2026 03:41:19 +0000 (0:00:00.662) 0:03:19.971 ******
2026-02-17 03:41:24.242516 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:41:24.242522 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:41:24.242529 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:41:24.242535 | orchestrator |
2026-02-17 03:41:24.242541 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-17 03:41:24.242548 | orchestrator | Tuesday 17 February 2026 03:41:20 +0000 (0:00:00.331) 0:03:20.302 ******
2026-02-17 03:41:24.242552 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:41:24.242556 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:41:24.242560 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:41:24.242569 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 03:41:24.242573 | orchestrator |
2026-02-17 03:41:24.242577 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-17 03:41:24.242580 | orchestrator | Tuesday 17 February 2026 03:41:21 +0000 (0:00:01.141) 0:03:21.444 ******
2026-02-17 03:41:24.242584 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:41:24.242622 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:41:24.242626 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:41:24.242630 | orchestrator |
2026-02-17 03:41:24.242634 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-17 03:41:24.242638 | orchestrator | Tuesday 17 February 2026 03:41:21 +0000 (0:00:00.358) 0:03:21.802 ******
2026-02-17 03:41:24.242641 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:41:24.242645 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:41:24.242649 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:41:24.242653 | orchestrator |
2026-02-17 03:41:24.242657 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-17 03:41:24.242661 | orchestrator | Tuesday
17 February 2026 03:41:23 +0000 (0:00:01.277) 0:03:23.079 ****** 2026-02-17 03:41:24.242664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:41:24.242672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:41:24.242682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:41:40.793986 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:41:40.794131 | orchestrator | 2026-02-17 03:41:40.794144 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-17 03:41:40.794156 | orchestrator | Tuesday 17 February 2026 03:41:24 +0000 (0:00:01.184) 0:03:24.263 ****** 2026-02-17 03:41:40.794165 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:41:40.794203 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:41:40.794211 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:41:40.794219 | orchestrator | 2026-02-17 03:41:40.794229 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-17 03:41:40.794238 | orchestrator | Tuesday 17 February 2026 03:41:24 +0000 (0:00:00.387) 0:03:24.651 ****** 2026-02-17 03:41:40.794248 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:41:40.794256 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:41:40.794266 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:41:40.794274 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.794283 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.794292 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.794301 | orchestrator | 2026-02-17 03:41:40.794310 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-17 03:41:40.794319 | orchestrator | Tuesday 17 February 2026 03:41:25 +0000 (0:00:00.688) 0:03:25.339 ****** 2026-02-17 03:41:40.794328 | orchestrator | skipping: [testbed-node-3] 2026-02-17 
03:41:40.794337 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:41:40.794346 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:41:40.794356 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:41:40.794365 | orchestrator | 2026-02-17 03:41:40.794374 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-17 03:41:40.794383 | orchestrator | Tuesday 17 February 2026 03:41:26 +0000 (0:00:01.140) 0:03:26.480 ****** 2026-02-17 03:41:40.794392 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.794400 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.794409 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.794418 | orchestrator | 2026-02-17 03:41:40.794427 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-17 03:41:40.794436 | orchestrator | Tuesday 17 February 2026 03:41:26 +0000 (0:00:00.356) 0:03:26.837 ****** 2026-02-17 03:41:40.794445 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:41:40.794479 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:41:40.794489 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:41:40.794498 | orchestrator | 2026-02-17 03:41:40.794507 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-17 03:41:40.794516 | orchestrator | Tuesday 17 February 2026 03:41:28 +0000 (0:00:01.469) 0:03:28.306 ****** 2026-02-17 03:41:40.794526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 03:41:40.794534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 03:41:40.794543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 03:41:40.794551 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.794560 | orchestrator | 2026-02-17 03:41:40.794568 | orchestrator | 
RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-17 03:41:40.794577 | orchestrator | Tuesday 17 February 2026 03:41:28 +0000 (0:00:00.657) 0:03:28.964 ****** 2026-02-17 03:41:40.794585 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.794594 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.794603 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.794649 | orchestrator | 2026-02-17 03:41:40.794658 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-17 03:41:40.794666 | orchestrator | 2026-02-17 03:41:40.794674 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 03:41:40.794683 | orchestrator | Tuesday 17 February 2026 03:41:29 +0000 (0:00:00.605) 0:03:29.569 ****** 2026-02-17 03:41:40.794692 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:41:40.794713 | orchestrator | 2026-02-17 03:41:40.794722 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 03:41:40.794730 | orchestrator | Tuesday 17 February 2026 03:41:30 +0000 (0:00:00.826) 0:03:30.396 ****** 2026-02-17 03:41:40.794738 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:41:40.794746 | orchestrator | 2026-02-17 03:41:40.794754 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 03:41:40.794762 | orchestrator | Tuesday 17 February 2026 03:41:30 +0000 (0:00:00.564) 0:03:30.961 ****** 2026-02-17 03:41:40.794771 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.794779 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.794787 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.794795 | orchestrator | 
2026-02-17 03:41:40.794804 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 03:41:40.794812 | orchestrator | Tuesday 17 February 2026 03:41:31 +0000 (0:00:00.766) 0:03:31.727 ****** 2026-02-17 03:41:40.794820 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.794828 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.794836 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.794844 | orchestrator | 2026-02-17 03:41:40.794852 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 03:41:40.794861 | orchestrator | Tuesday 17 February 2026 03:41:32 +0000 (0:00:00.618) 0:03:32.345 ****** 2026-02-17 03:41:40.794870 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.794878 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.794886 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.794894 | orchestrator | 2026-02-17 03:41:40.794902 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 03:41:40.794910 | orchestrator | Tuesday 17 February 2026 03:41:32 +0000 (0:00:00.355) 0:03:32.701 ****** 2026-02-17 03:41:40.794918 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.794926 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.794947 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.794956 | orchestrator | 2026-02-17 03:41:40.794980 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 03:41:40.794988 | orchestrator | Tuesday 17 February 2026 03:41:32 +0000 (0:00:00.336) 0:03:33.038 ****** 2026-02-17 03:41:40.795013 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.795022 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.795031 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.795040 | orchestrator | 2026-02-17 
03:41:40.795049 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 03:41:40.795058 | orchestrator | Tuesday 17 February 2026 03:41:33 +0000 (0:00:00.728) 0:03:33.766 ****** 2026-02-17 03:41:40.795067 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.795076 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.795085 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.795094 | orchestrator | 2026-02-17 03:41:40.795102 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 03:41:40.795111 | orchestrator | Tuesday 17 February 2026 03:41:34 +0000 (0:00:00.640) 0:03:34.407 ****** 2026-02-17 03:41:40.795120 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.795129 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.795137 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.795146 | orchestrator | 2026-02-17 03:41:40.795155 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 03:41:40.795164 | orchestrator | Tuesday 17 February 2026 03:41:34 +0000 (0:00:00.343) 0:03:34.750 ****** 2026-02-17 03:41:40.795173 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.795181 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.795190 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.795199 | orchestrator | 2026-02-17 03:41:40.795208 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 03:41:40.795217 | orchestrator | Tuesday 17 February 2026 03:41:35 +0000 (0:00:00.718) 0:03:35.469 ****** 2026-02-17 03:41:40.795226 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.795234 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.795243 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.795252 | orchestrator | 2026-02-17 03:41:40.795261 | orchestrator | TASK 
[ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 03:41:40.795270 | orchestrator | Tuesday 17 February 2026 03:41:36 +0000 (0:00:01.011) 0:03:36.481 ****** 2026-02-17 03:41:40.795279 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.795288 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.795296 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.795305 | orchestrator | 2026-02-17 03:41:40.795314 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 03:41:40.795323 | orchestrator | Tuesday 17 February 2026 03:41:36 +0000 (0:00:00.323) 0:03:36.805 ****** 2026-02-17 03:41:40.795332 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.795340 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.795349 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.795358 | orchestrator | 2026-02-17 03:41:40.795367 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 03:41:40.795376 | orchestrator | Tuesday 17 February 2026 03:41:37 +0000 (0:00:00.357) 0:03:37.162 ****** 2026-02-17 03:41:40.795385 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.795394 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.795403 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.795412 | orchestrator | 2026-02-17 03:41:40.795421 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 03:41:40.795429 | orchestrator | Tuesday 17 February 2026 03:41:37 +0000 (0:00:00.339) 0:03:37.502 ****** 2026-02-17 03:41:40.795438 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.795447 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.795455 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.795464 | orchestrator | 2026-02-17 03:41:40.795472 | orchestrator | TASK [ceph-handler : 
Set_fact handler_rgw_status] ****************************** 2026-02-17 03:41:40.795481 | orchestrator | Tuesday 17 February 2026 03:41:38 +0000 (0:00:00.637) 0:03:38.139 ****** 2026-02-17 03:41:40.795490 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.795505 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.795514 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.795522 | orchestrator | 2026-02-17 03:41:40.795531 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 03:41:40.795540 | orchestrator | Tuesday 17 February 2026 03:41:38 +0000 (0:00:00.361) 0:03:38.500 ****** 2026-02-17 03:41:40.795549 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.795558 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.795563 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.795568 | orchestrator | 2026-02-17 03:41:40.795573 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 03:41:40.795578 | orchestrator | Tuesday 17 February 2026 03:41:38 +0000 (0:00:00.326) 0:03:38.827 ****** 2026-02-17 03:41:40.795582 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:41:40.795587 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:41:40.795592 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:41:40.795597 | orchestrator | 2026-02-17 03:41:40.795602 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 03:41:40.795606 | orchestrator | Tuesday 17 February 2026 03:41:39 +0000 (0:00:00.346) 0:03:39.173 ****** 2026-02-17 03:41:40.795627 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.795633 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.795637 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.795642 | orchestrator | 2026-02-17 03:41:40.795647 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2026-02-17 03:41:40.795652 | orchestrator | Tuesday 17 February 2026 03:41:39 +0000 (0:00:00.630) 0:03:39.804 ****** 2026-02-17 03:41:40.795656 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.795661 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.795666 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.795671 | orchestrator | 2026-02-17 03:41:40.795676 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 03:41:40.795680 | orchestrator | Tuesday 17 February 2026 03:41:40 +0000 (0:00:00.394) 0:03:40.198 ****** 2026-02-17 03:41:40.795685 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:41:40.795690 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:41:40.795695 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:41:40.795700 | orchestrator | 2026-02-17 03:41:40.795709 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-17 03:41:40.795719 | orchestrator | Tuesday 17 February 2026 03:41:40 +0000 (0:00:00.620) 0:03:40.818 ****** 2026-02-17 03:42:30.969374 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:42:30.969580 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:42:30.969608 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:42:30.969627 | orchestrator | 2026-02-17 03:42:30.969646 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-17 03:42:30.969661 | orchestrator | Tuesday 17 February 2026 03:41:41 +0000 (0:00:00.676) 0:03:41.495 ****** 2026-02-17 03:42:30.969672 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:42:30.969682 | orchestrator | 2026-02-17 03:42:30.969755 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-17 03:42:30.969776 | orchestrator | Tuesday 17 February 
2026 03:41:42 +0000 (0:00:00.614) 0:03:42.109 ****** 2026-02-17 03:42:30.969787 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:42:30.969798 | orchestrator | 2026-02-17 03:42:30.969808 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-17 03:42:30.969818 | orchestrator | Tuesday 17 February 2026 03:41:42 +0000 (0:00:00.162) 0:03:42.272 ****** 2026-02-17 03:42:30.969828 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-17 03:42:30.969838 | orchestrator | 2026-02-17 03:42:30.969847 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-17 03:42:30.969857 | orchestrator | Tuesday 17 February 2026 03:41:43 +0000 (0:00:01.086) 0:03:43.359 ****** 2026-02-17 03:42:30.969896 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:42:30.969909 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:42:30.969920 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:42:30.969932 | orchestrator | 2026-02-17 03:42:30.969943 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-17 03:42:30.969954 | orchestrator | Tuesday 17 February 2026 03:41:43 +0000 (0:00:00.610) 0:03:43.970 ****** 2026-02-17 03:42:30.969965 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:42:30.969976 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:42:30.969986 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:42:30.969997 | orchestrator | 2026-02-17 03:42:30.970008 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-17 03:42:30.970084 | orchestrator | Tuesday 17 February 2026 03:41:44 +0000 (0:00:00.366) 0:03:44.337 ****** 2026-02-17 03:42:30.970097 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.970109 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:42:30.970120 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.970131 | orchestrator | 
2026-02-17 03:42:30.970141 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-17 03:42:30.970151 | orchestrator | Tuesday 17 February 2026 03:41:45 +0000 (0:00:01.232) 0:03:45.570 ****** 2026-02-17 03:42:30.970160 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.970170 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:42:30.970180 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.970190 | orchestrator | 2026-02-17 03:42:30.970200 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-17 03:42:30.970216 | orchestrator | Tuesday 17 February 2026 03:41:46 +0000 (0:00:00.834) 0:03:46.404 ****** 2026-02-17 03:42:30.970233 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.970249 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:42:30.970266 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.970283 | orchestrator | 2026-02-17 03:42:30.970299 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-17 03:42:30.970315 | orchestrator | Tuesday 17 February 2026 03:41:47 +0000 (0:00:00.979) 0:03:47.383 ****** 2026-02-17 03:42:30.970326 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:42:30.970336 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:42:30.970345 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:42:30.970355 | orchestrator | 2026-02-17 03:42:30.970364 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-17 03:42:30.970374 | orchestrator | Tuesday 17 February 2026 03:41:48 +0000 (0:00:00.680) 0:03:48.063 ****** 2026-02-17 03:42:30.970384 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.970393 | orchestrator | 2026-02-17 03:42:30.970402 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-17 03:42:30.970412 | orchestrator | 
Tuesday 17 February 2026 03:41:49 +0000 (0:00:01.276) 0:03:49.340 ****** 2026-02-17 03:42:30.970421 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:42:30.970436 | orchestrator | 2026-02-17 03:42:30.970451 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-17 03:42:30.970467 | orchestrator | Tuesday 17 February 2026 03:41:50 +0000 (0:00:00.701) 0:03:50.041 ****** 2026-02-17 03:42:30.970482 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-17 03:42:30.970498 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:42:30.970515 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:42:30.970531 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-17 03:42:30.970548 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-17 03:42:30.970558 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-17 03:42:30.970568 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 03:42:30.970583 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-17 03:42:30.970599 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 03:42:30.970630 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-17 03:42:30.970647 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-17 03:42:30.970663 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-17 03:42:30.970679 | orchestrator | 2026-02-17 03:42:30.970719 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-17 03:42:30.970730 | orchestrator | Tuesday 17 February 2026 03:41:53 +0000 (0:00:03.279) 0:03:53.321 ****** 2026-02-17 03:42:30.970740 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.970750 | orchestrator | 
changed: [testbed-node-1] 2026-02-17 03:42:30.970775 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.970785 | orchestrator | 2026-02-17 03:42:30.970795 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-17 03:42:30.970825 | orchestrator | Tuesday 17 February 2026 03:41:54 +0000 (0:00:01.211) 0:03:54.533 ****** 2026-02-17 03:42:30.970836 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:42:30.970846 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:42:30.970855 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:42:30.970865 | orchestrator | 2026-02-17 03:42:30.970875 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-17 03:42:30.970884 | orchestrator | Tuesday 17 February 2026 03:41:55 +0000 (0:00:00.695) 0:03:55.228 ****** 2026-02-17 03:42:30.970893 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:42:30.970903 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:42:30.970912 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:42:30.970922 | orchestrator | 2026-02-17 03:42:30.970931 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-17 03:42:30.970941 | orchestrator | Tuesday 17 February 2026 03:41:55 +0000 (0:00:00.357) 0:03:55.586 ****** 2026-02-17 03:42:30.970950 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.970960 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:42:30.970970 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.970979 | orchestrator | 2026-02-17 03:42:30.970989 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-17 03:42:30.970998 | orchestrator | Tuesday 17 February 2026 03:41:56 +0000 (0:00:01.451) 0:03:57.037 ****** 2026-02-17 03:42:30.971008 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.971018 | orchestrator | changed: [testbed-node-1] 2026-02-17 
03:42:30.971027 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.971036 | orchestrator | 2026-02-17 03:42:30.971046 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-17 03:42:30.971055 | orchestrator | Tuesday 17 February 2026 03:41:58 +0000 (0:00:01.575) 0:03:58.613 ****** 2026-02-17 03:42:30.971065 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:42:30.971075 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:42:30.971091 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:42:30.971114 | orchestrator | 2026-02-17 03:42:30.971132 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-17 03:42:30.971147 | orchestrator | Tuesday 17 February 2026 03:41:58 +0000 (0:00:00.324) 0:03:58.937 ****** 2026-02-17 03:42:30.971162 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:42:30.971178 | orchestrator | 2026-02-17 03:42:30.971196 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-17 03:42:30.971213 | orchestrator | Tuesday 17 February 2026 03:41:59 +0000 (0:00:00.569) 0:03:59.506 ****** 2026-02-17 03:42:30.971228 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:42:30.971246 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:42:30.971256 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:42:30.971266 | orchestrator | 2026-02-17 03:42:30.971275 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-17 03:42:30.971285 | orchestrator | Tuesday 17 February 2026 03:42:00 +0000 (0:00:00.603) 0:04:00.110 ****** 2026-02-17 03:42:30.971294 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:42:30.971318 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:42:30.971328 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 03:42:30.971337 | orchestrator | 2026-02-17 03:42:30.971347 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-17 03:42:30.971356 | orchestrator | Tuesday 17 February 2026 03:42:00 +0000 (0:00:00.349) 0:04:00.460 ****** 2026-02-17 03:42:30.971366 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:42:30.971376 | orchestrator | 2026-02-17 03:42:30.971385 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-17 03:42:30.971395 | orchestrator | Tuesday 17 February 2026 03:42:00 +0000 (0:00:00.570) 0:04:01.030 ****** 2026-02-17 03:42:30.971404 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.971414 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:42:30.971423 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.971433 | orchestrator | 2026-02-17 03:42:30.971442 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-17 03:42:30.971451 | orchestrator | Tuesday 17 February 2026 03:42:03 +0000 (0:00:02.109) 0:04:03.140 ****** 2026-02-17 03:42:30.971461 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.971470 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:42:30.971480 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.971489 | orchestrator | 2026-02-17 03:42:30.971499 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-17 03:42:30.971508 | orchestrator | Tuesday 17 February 2026 03:42:04 +0000 (0:00:01.261) 0:04:04.401 ****** 2026-02-17 03:42:30.971522 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.971539 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:42:30.971554 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.971569 | orchestrator | 2026-02-17 03:42:30.971584 | 
orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-17 03:42:30.971599 | orchestrator | Tuesday 17 February 2026 03:42:06 +0000 (0:00:01.887) 0:04:06.289 ****** 2026-02-17 03:42:30.971614 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:42:30.971630 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:42:30.971647 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:42:30.971665 | orchestrator | 2026-02-17 03:42:30.971680 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-17 03:42:30.971721 | orchestrator | Tuesday 17 February 2026 03:42:08 +0000 (0:00:02.000) 0:04:08.289 ****** 2026-02-17 03:42:30.971731 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:42:30.971741 | orchestrator | 2026-02-17 03:42:30.971751 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-17 03:42:30.971760 | orchestrator | Tuesday 17 February 2026 03:42:09 +0000 (0:00:00.838) 0:04:09.127 ****** 2026-02-17 03:42:30.971777 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-17 03:42:30.971787 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:42:30.971797 | orchestrator | 2026-02-17 03:42:30.971818 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-17 03:43:06.630397 | orchestrator | Tuesday 17 February 2026 03:42:30 +0000 (0:00:21.851) 0:04:30.979 ****** 2026-02-17 03:43:06.630507 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:06.630525 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:06.630536 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:06.630546 | orchestrator | 2026-02-17 03:43:06.630557 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-17 03:43:06.630567 | orchestrator | Tuesday 17 February 2026 03:42:40 +0000 (0:00:09.061) 0:04:40.041 ****** 2026-02-17 03:43:06.630577 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.630588 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.630597 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:06.630629 | orchestrator | 2026-02-17 03:43:06.630640 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-17 03:43:06.630650 | orchestrator | Tuesday 17 February 2026 03:42:40 +0000 (0:00:00.332) 0:04:40.373 ****** 2026-02-17 03:43:06.630662 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__72da69dedc13b1472300bedca84d67d9dd2dcf70'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-17 03:43:06.630674 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__72da69dedc13b1472300bedca84d67d9dd2dcf70'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-17 03:43:06.630685 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__72da69dedc13b1472300bedca84d67d9dd2dcf70'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-17 03:43:06.630696 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__72da69dedc13b1472300bedca84d67d9dd2dcf70'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-17 03:43:06.630706 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__72da69dedc13b1472300bedca84d67d9dd2dcf70'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-17 03:43:06.630717 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__72da69dedc13b1472300bedca84d67d9dd2dcf70'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__72da69dedc13b1472300bedca84d67d9dd2dcf70'}])  2026-02-17 03:43:06.630729 | orchestrator | 2026-02-17 03:43:06.630766 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-02-17 03:43:06.630776 | orchestrator | Tuesday 17 February 2026 03:42:55 +0000 (0:00:14.749) 0:04:55.122 ****** 2026-02-17 03:43:06.630786 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.630796 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.630805 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:06.630815 | orchestrator | 2026-02-17 03:43:06.630825 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-17 03:43:06.630835 | orchestrator | Tuesday 17 February 2026 03:42:55 +0000 (0:00:00.369) 0:04:55.491 ****** 2026-02-17 03:43:06.630845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:43:06.630855 | orchestrator | 2026-02-17 03:43:06.630865 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-17 03:43:06.630874 | orchestrator | Tuesday 17 February 2026 03:42:56 +0000 (0:00:00.870) 0:04:56.362 ****** 2026-02-17 03:43:06.630884 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:06.630893 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:06.630903 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:06.630913 | orchestrator | 2026-02-17 03:43:06.630931 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-17 03:43:06.630943 | orchestrator | Tuesday 17 February 2026 03:42:56 +0000 (0:00:00.368) 0:04:56.730 ****** 2026-02-17 03:43:06.630971 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.630984 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.630995 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:06.631007 | orchestrator | 2026-02-17 03:43:06.631035 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-17 
03:43:06.631047 | orchestrator | Tuesday 17 February 2026 03:42:57 +0000 (0:00:00.392) 0:04:57.122 ****** 2026-02-17 03:43:06.631058 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 03:43:06.631070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 03:43:06.631081 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 03:43:06.631092 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.631103 | orchestrator | 2026-02-17 03:43:06.631114 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-17 03:43:06.631125 | orchestrator | Tuesday 17 February 2026 03:42:58 +0000 (0:00:00.982) 0:04:58.105 ****** 2026-02-17 03:43:06.631137 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:06.631148 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:06.631159 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:06.631170 | orchestrator | 2026-02-17 03:43:06.631181 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-17 03:43:06.631193 | orchestrator | 2026-02-17 03:43:06.631204 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 03:43:06.631215 | orchestrator | Tuesday 17 February 2026 03:42:58 +0000 (0:00:00.893) 0:04:58.998 ****** 2026-02-17 03:43:06.631226 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:43:06.631237 | orchestrator | 2026-02-17 03:43:06.631247 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 03:43:06.631257 | orchestrator | Tuesday 17 February 2026 03:42:59 +0000 (0:00:00.575) 0:04:59.574 ****** 2026-02-17 03:43:06.631266 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-17 03:43:06.631276 | orchestrator | 2026-02-17 03:43:06.631286 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 03:43:06.631295 | orchestrator | Tuesday 17 February 2026 03:43:00 +0000 (0:00:00.848) 0:05:00.422 ****** 2026-02-17 03:43:06.631305 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:06.631314 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:06.631324 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:06.631333 | orchestrator | 2026-02-17 03:43:06.631343 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 03:43:06.631352 | orchestrator | Tuesday 17 February 2026 03:43:01 +0000 (0:00:00.827) 0:05:01.250 ****** 2026-02-17 03:43:06.631362 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.631372 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.631381 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:06.631391 | orchestrator | 2026-02-17 03:43:06.631400 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 03:43:06.631410 | orchestrator | Tuesday 17 February 2026 03:43:01 +0000 (0:00:00.346) 0:05:01.597 ****** 2026-02-17 03:43:06.631420 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.631430 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.631439 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:06.631449 | orchestrator | 2026-02-17 03:43:06.631458 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 03:43:06.631468 | orchestrator | Tuesday 17 February 2026 03:43:02 +0000 (0:00:00.657) 0:05:02.254 ****** 2026-02-17 03:43:06.631477 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.631487 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.631503 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 03:43:06.631513 | orchestrator | 2026-02-17 03:43:06.631523 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 03:43:06.631532 | orchestrator | Tuesday 17 February 2026 03:43:02 +0000 (0:00:00.357) 0:05:02.611 ****** 2026-02-17 03:43:06.631542 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:06.631552 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:06.631561 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:06.631571 | orchestrator | 2026-02-17 03:43:06.631580 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 03:43:06.631590 | orchestrator | Tuesday 17 February 2026 03:43:03 +0000 (0:00:00.737) 0:05:03.348 ****** 2026-02-17 03:43:06.631600 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.631610 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.631619 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:06.631629 | orchestrator | 2026-02-17 03:43:06.631638 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 03:43:06.631648 | orchestrator | Tuesday 17 February 2026 03:43:03 +0000 (0:00:00.359) 0:05:03.708 ****** 2026-02-17 03:43:06.631658 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.631667 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.631677 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:06.631687 | orchestrator | 2026-02-17 03:43:06.631696 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 03:43:06.631706 | orchestrator | Tuesday 17 February 2026 03:43:04 +0000 (0:00:00.647) 0:05:04.356 ****** 2026-02-17 03:43:06.631716 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:06.631725 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:06.631735 | orchestrator | ok: [testbed-node-2] 2026-02-17 
03:43:06.631811 | orchestrator | 2026-02-17 03:43:06.631822 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 03:43:06.631832 | orchestrator | Tuesday 17 February 2026 03:43:05 +0000 (0:00:00.806) 0:05:05.162 ****** 2026-02-17 03:43:06.631841 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:06.631851 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:06.631861 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:06.631870 | orchestrator | 2026-02-17 03:43:06.631880 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 03:43:06.631890 | orchestrator | Tuesday 17 February 2026 03:43:05 +0000 (0:00:00.854) 0:05:06.016 ****** 2026-02-17 03:43:06.631900 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:06.631915 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:06.631926 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:06.631936 | orchestrator | 2026-02-17 03:43:06.631946 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 03:43:06.631962 | orchestrator | Tuesday 17 February 2026 03:43:06 +0000 (0:00:00.635) 0:05:06.652 ****** 2026-02-17 03:43:39.280197 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:39.280285 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:39.280295 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:39.280303 | orchestrator | 2026-02-17 03:43:39.280312 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 03:43:39.280320 | orchestrator | Tuesday 17 February 2026 03:43:06 +0000 (0:00:00.353) 0:05:07.006 ****** 2026-02-17 03:43:39.280327 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.280349 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.280356 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.280363 | orchestrator | 
2026-02-17 03:43:39.280370 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 03:43:39.280377 | orchestrator | Tuesday 17 February 2026 03:43:07 +0000 (0:00:00.346) 0:05:07.352 ****** 2026-02-17 03:43:39.280384 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.280391 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.280398 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.280405 | orchestrator | 2026-02-17 03:43:39.280432 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 03:43:39.280439 | orchestrator | Tuesday 17 February 2026 03:43:07 +0000 (0:00:00.349) 0:05:07.702 ****** 2026-02-17 03:43:39.280446 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.280453 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.280459 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.280466 | orchestrator | 2026-02-17 03:43:39.280472 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 03:43:39.280479 | orchestrator | Tuesday 17 February 2026 03:43:08 +0000 (0:00:00.628) 0:05:08.331 ****** 2026-02-17 03:43:39.280486 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.280493 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.280499 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.280506 | orchestrator | 2026-02-17 03:43:39.280513 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 03:43:39.280520 | orchestrator | Tuesday 17 February 2026 03:43:08 +0000 (0:00:00.422) 0:05:08.754 ****** 2026-02-17 03:43:39.280526 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.280533 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.280540 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.280547 | orchestrator | 
2026-02-17 03:43:39.280553 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 03:43:39.280560 | orchestrator | Tuesday 17 February 2026 03:43:09 +0000 (0:00:00.348) 0:05:09.102 ****** 2026-02-17 03:43:39.280567 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:39.280573 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:39.280580 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:39.280587 | orchestrator | 2026-02-17 03:43:39.280593 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 03:43:39.280600 | orchestrator | Tuesday 17 February 2026 03:43:09 +0000 (0:00:00.406) 0:05:09.508 ****** 2026-02-17 03:43:39.280607 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:39.280614 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:39.280621 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:39.280627 | orchestrator | 2026-02-17 03:43:39.280634 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 03:43:39.280641 | orchestrator | Tuesday 17 February 2026 03:43:10 +0000 (0:00:00.663) 0:05:10.172 ****** 2026-02-17 03:43:39.280648 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:39.280655 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:39.280661 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:39.280668 | orchestrator | 2026-02-17 03:43:39.280674 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-17 03:43:39.280681 | orchestrator | Tuesday 17 February 2026 03:43:10 +0000 (0:00:00.645) 0:05:10.818 ****** 2026-02-17 03:43:39.280688 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 03:43:39.280696 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 03:43:39.280703 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-02-17 03:43:39.280710 | orchestrator | 2026-02-17 03:43:39.280717 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-17 03:43:39.280723 | orchestrator | Tuesday 17 February 2026 03:43:11 +0000 (0:00:00.976) 0:05:11.794 ****** 2026-02-17 03:43:39.280730 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:43:39.280738 | orchestrator | 2026-02-17 03:43:39.280744 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-17 03:43:39.280751 | orchestrator | Tuesday 17 February 2026 03:43:12 +0000 (0:00:00.860) 0:05:12.654 ****** 2026-02-17 03:43:39.280759 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:43:39.280767 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:43:39.280774 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:43:39.280834 | orchestrator | 2026-02-17 03:43:39.280843 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-17 03:43:39.280857 | orchestrator | Tuesday 17 February 2026 03:43:13 +0000 (0:00:00.772) 0:05:13.427 ****** 2026-02-17 03:43:39.280864 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.280872 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.280880 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.280888 | orchestrator | 2026-02-17 03:43:39.280895 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-17 03:43:39.280903 | orchestrator | Tuesday 17 February 2026 03:43:13 +0000 (0:00:00.399) 0:05:13.826 ****** 2026-02-17 03:43:39.280910 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-17 03:43:39.280918 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-17 03:43:39.280924 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-02-17 03:43:39.280932 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-17 03:43:39.280939 | orchestrator | 2026-02-17 03:43:39.280959 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-17 03:43:39.280966 | orchestrator | Tuesday 17 February 2026 03:43:24 +0000 (0:00:10.466) 0:05:24.293 ****** 2026-02-17 03:43:39.280972 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:39.280992 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:39.280999 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:39.281005 | orchestrator | 2026-02-17 03:43:39.281011 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-17 03:43:39.281017 | orchestrator | Tuesday 17 February 2026 03:43:24 +0000 (0:00:00.682) 0:05:24.976 ****** 2026-02-17 03:43:39.281023 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-17 03:43:39.281029 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-17 03:43:39.281036 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-17 03:43:39.281042 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-17 03:43:39.281048 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:43:39.281055 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:43:39.281061 | orchestrator | 2026-02-17 03:43:39.281067 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-17 03:43:39.281073 | orchestrator | Tuesday 17 February 2026 03:43:27 +0000 (0:00:02.219) 0:05:27.195 ****** 2026-02-17 03:43:39.281079 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-17 03:43:39.281086 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-17 03:43:39.281092 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-17 
03:43:39.281098 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-17 03:43:39.281104 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-17 03:43:39.281110 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-17 03:43:39.281117 | orchestrator | 2026-02-17 03:43:39.281123 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-17 03:43:39.281129 | orchestrator | Tuesday 17 February 2026 03:43:28 +0000 (0:00:01.208) 0:05:28.404 ****** 2026-02-17 03:43:39.281135 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:43:39.281142 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:43:39.281148 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:43:39.281154 | orchestrator | 2026-02-17 03:43:39.281160 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-17 03:43:39.281166 | orchestrator | Tuesday 17 February 2026 03:43:29 +0000 (0:00:00.698) 0:05:29.102 ****** 2026-02-17 03:43:39.281173 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.281179 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.281185 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.281192 | orchestrator | 2026-02-17 03:43:39.281198 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-17 03:43:39.281204 | orchestrator | Tuesday 17 February 2026 03:43:29 +0000 (0:00:00.613) 0:05:29.716 ****** 2026-02-17 03:43:39.281217 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.281223 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.281229 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.281235 | orchestrator | 2026-02-17 03:43:39.281242 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-17 03:43:39.281248 | orchestrator | Tuesday 17 February 2026 03:43:30 +0000 (0:00:00.356) 
0:05:30.073 ****** 2026-02-17 03:43:39.281254 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:43:39.281261 | orchestrator | 2026-02-17 03:43:39.281267 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-17 03:43:39.281273 | orchestrator | Tuesday 17 February 2026 03:43:30 +0000 (0:00:00.566) 0:05:30.639 ****** 2026-02-17 03:43:39.281279 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.281286 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.281292 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.281298 | orchestrator | 2026-02-17 03:43:39.281304 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-17 03:43:39.281311 | orchestrator | Tuesday 17 February 2026 03:43:31 +0000 (0:00:00.606) 0:05:31.246 ****** 2026-02-17 03:43:39.281317 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:43:39.281323 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:43:39.281329 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:43:39.281335 | orchestrator | 2026-02-17 03:43:39.281342 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-17 03:43:39.281348 | orchestrator | Tuesday 17 February 2026 03:43:31 +0000 (0:00:00.360) 0:05:31.606 ****** 2026-02-17 03:43:39.281354 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:43:39.281360 | orchestrator | 2026-02-17 03:43:39.281366 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-17 03:43:39.281373 | orchestrator | Tuesday 17 February 2026 03:43:32 +0000 (0:00:00.598) 0:05:32.204 ****** 2026-02-17 03:43:39.281379 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:43:39.281385 | orchestrator | 
changed: [testbed-node-1] 2026-02-17 03:43:39.281391 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:43:39.281397 | orchestrator | 2026-02-17 03:43:39.281403 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-17 03:43:39.281410 | orchestrator | Tuesday 17 February 2026 03:43:34 +0000 (0:00:01.905) 0:05:34.110 ****** 2026-02-17 03:43:39.281416 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:43:39.281422 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:43:39.281428 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:43:39.281434 | orchestrator | 2026-02-17 03:43:39.281440 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-17 03:43:39.281447 | orchestrator | Tuesday 17 February 2026 03:43:35 +0000 (0:00:01.219) 0:05:35.329 ****** 2026-02-17 03:43:39.281453 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:43:39.281459 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:43:39.281465 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:43:39.281471 | orchestrator | 2026-02-17 03:43:39.281478 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-17 03:43:39.281488 | orchestrator | Tuesday 17 February 2026 03:43:37 +0000 (0:00:01.861) 0:05:37.191 ****** 2026-02-17 03:43:39.281494 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:43:39.281501 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:43:39.281507 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:43:39.281513 | orchestrator | 2026-02-17 03:43:39.281524 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-17 03:44:31.494808 | orchestrator | Tuesday 17 February 2026 03:43:39 +0000 (0:00:02.113) 0:05:39.304 ****** 2026-02-17 03:44:31.494938 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:44:31.494951 | orchestrator | skipping: 
[testbed-node-1] 2026-02-17 03:44:31.494960 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-17 03:44:31.494989 | orchestrator | 2026-02-17 03:44:31.494998 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-17 03:44:31.495005 | orchestrator | Tuesday 17 February 2026 03:43:40 +0000 (0:00:00.784) 0:05:40.089 ****** 2026-02-17 03:44:31.495013 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-17 03:44:31.495022 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-17 03:44:31.495029 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-17 03:44:31.495037 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-17 03:44:31.495044 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 03:44:31.495051 | orchestrator | 2026-02-17 03:44:31.495059 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-17 03:44:31.495066 | orchestrator | Tuesday 17 February 2026 03:44:04 +0000 (0:00:24.171) 0:06:04.260 ****** 2026-02-17 03:44:31.495073 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 03:44:31.495081 | orchestrator | 2026-02-17 03:44:31.495088 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-17 03:44:31.495095 | orchestrator | Tuesday 17 February 2026 03:44:05 +0000 (0:00:01.237) 0:06:05.497 ****** 2026-02-17 03:44:31.495102 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:44:31.495110 | orchestrator | 2026-02-17 03:44:31.495118 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] 
************************** 2026-02-17 03:44:31.495125 | orchestrator | Tuesday 17 February 2026 03:44:05 +0000 (0:00:00.374) 0:06:05.872 ****** 2026-02-17 03:44:31.495132 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:44:31.495139 | orchestrator | 2026-02-17 03:44:31.495146 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-17 03:44:31.495154 | orchestrator | Tuesday 17 February 2026 03:44:05 +0000 (0:00:00.168) 0:06:06.040 ****** 2026-02-17 03:44:31.495161 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-17 03:44:31.495168 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-17 03:44:31.495175 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-17 03:44:31.495182 | orchestrator | 2026-02-17 03:44:31.495189 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-17 03:44:31.495197 | orchestrator | Tuesday 17 February 2026 03:44:12 +0000 (0:00:06.278) 0:06:12.319 ****** 2026-02-17 03:44:31.495204 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-17 03:44:31.495211 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-17 03:44:31.495218 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-17 03:44:31.495226 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-17 03:44:31.495233 | orchestrator | 2026-02-17 03:44:31.495240 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-17 03:44:31.495247 | orchestrator | Tuesday 17 February 2026 03:44:17 +0000 (0:00:05.477) 0:06:17.797 ****** 2026-02-17 03:44:31.495254 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:44:31.495262 | orchestrator | changed: [testbed-node-1] 
2026-02-17 03:44:31.495269 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:44:31.495276 | orchestrator | 2026-02-17 03:44:31.495284 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-17 03:44:31.495291 | orchestrator | Tuesday 17 February 2026 03:44:18 +0000 (0:00:00.698) 0:06:18.495 ****** 2026-02-17 03:44:31.495298 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:44:31.495305 | orchestrator | 2026-02-17 03:44:31.495317 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-17 03:44:31.495324 | orchestrator | Tuesday 17 February 2026 03:44:19 +0000 (0:00:00.834) 0:06:19.330 ****** 2026-02-17 03:44:31.495331 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:44:31.495339 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:44:31.495346 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:44:31.495353 | orchestrator | 2026-02-17 03:44:31.495362 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-17 03:44:31.495370 | orchestrator | Tuesday 17 February 2026 03:44:19 +0000 (0:00:00.433) 0:06:19.763 ****** 2026-02-17 03:44:31.495379 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:44:31.495387 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:44:31.495396 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:44:31.495404 | orchestrator | 2026-02-17 03:44:31.495412 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-17 03:44:31.495420 | orchestrator | Tuesday 17 February 2026 03:44:21 +0000 (0:00:01.303) 0:06:21.067 ****** 2026-02-17 03:44:31.495429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 03:44:31.495437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 03:44:31.495459 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 03:44:31.495468 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:44:31.495476 | orchestrator | 2026-02-17 03:44:31.495485 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-17 03:44:31.495493 | orchestrator | Tuesday 17 February 2026 03:44:21 +0000 (0:00:00.946) 0:06:22.013 ****** 2026-02-17 03:44:31.495516 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:44:31.495525 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:44:31.495533 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:44:31.495542 | orchestrator | 2026-02-17 03:44:31.495550 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-17 03:44:31.495558 | orchestrator | 2026-02-17 03:44:31.495568 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 03:44:31.495576 | orchestrator | Tuesday 17 February 2026 03:44:22 +0000 (0:00:00.891) 0:06:22.905 ****** 2026-02-17 03:44:31.495585 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:44:31.495594 | orchestrator | 2026-02-17 03:44:31.495602 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 03:44:31.495611 | orchestrator | Tuesday 17 February 2026 03:44:23 +0000 (0:00:00.568) 0:06:23.474 ****** 2026-02-17 03:44:31.495619 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:44:31.495627 | orchestrator | 2026-02-17 03:44:31.495636 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 03:44:31.495644 | orchestrator | Tuesday 17 February 2026 03:44:24 +0000 (0:00:00.824) 0:06:24.299 ****** 2026-02-17 
03:44:31.495652 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:44:31.495661 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:44:31.495669 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:44:31.495677 | orchestrator | 2026-02-17 03:44:31.495685 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 03:44:31.495694 | orchestrator | Tuesday 17 February 2026 03:44:24 +0000 (0:00:00.357) 0:06:24.657 ****** 2026-02-17 03:44:31.495702 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:44:31.495710 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:44:31.495719 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:44:31.495726 | orchestrator | 2026-02-17 03:44:31.495733 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 03:44:31.495741 | orchestrator | Tuesday 17 February 2026 03:44:25 +0000 (0:00:00.710) 0:06:25.367 ****** 2026-02-17 03:44:31.495748 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:44:31.495756 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:44:31.495770 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:44:31.495777 | orchestrator | 2026-02-17 03:44:31.495785 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 03:44:31.495792 | orchestrator | Tuesday 17 February 2026 03:44:26 +0000 (0:00:00.702) 0:06:26.070 ****** 2026-02-17 03:44:31.495799 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:44:31.495807 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:44:31.495814 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:44:31.495821 | orchestrator | 2026-02-17 03:44:31.495829 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 03:44:31.495836 | orchestrator | Tuesday 17 February 2026 03:44:27 +0000 (0:00:01.096) 0:06:27.167 ****** 2026-02-17 03:44:31.495843 | orchestrator | skipping: 
[testbed-node-3] 2026-02-17 03:44:31.495851 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:44:31.495874 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:44:31.495882 | orchestrator | 2026-02-17 03:44:31.495889 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 03:44:31.495896 | orchestrator | Tuesday 17 February 2026 03:44:27 +0000 (0:00:00.397) 0:06:27.565 ****** 2026-02-17 03:44:31.495903 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:44:31.495911 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:44:31.495918 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:44:31.495925 | orchestrator | 2026-02-17 03:44:31.495933 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 03:44:31.495940 | orchestrator | Tuesday 17 February 2026 03:44:27 +0000 (0:00:00.378) 0:06:27.944 ****** 2026-02-17 03:44:31.495947 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:44:31.495955 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:44:31.495962 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:44:31.495969 | orchestrator | 2026-02-17 03:44:31.495977 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 03:44:31.495984 | orchestrator | Tuesday 17 February 2026 03:44:28 +0000 (0:00:00.340) 0:06:28.285 ****** 2026-02-17 03:44:31.495991 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:44:31.495999 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:44:31.496006 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:44:31.496013 | orchestrator | 2026-02-17 03:44:31.496020 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 03:44:31.496028 | orchestrator | Tuesday 17 February 2026 03:44:29 +0000 (0:00:01.076) 0:06:29.361 ****** 2026-02-17 03:44:31.496035 | orchestrator | ok: [testbed-node-3] 2026-02-17 
03:44:31.496042 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:44:31.496049 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:44:31.496056 | orchestrator | 2026-02-17 03:44:31.496064 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 03:44:31.496071 | orchestrator | Tuesday 17 February 2026 03:44:30 +0000 (0:00:00.773) 0:06:30.134 ****** 2026-02-17 03:44:31.496078 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:44:31.496086 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:44:31.496093 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:44:31.496100 | orchestrator | 2026-02-17 03:44:31.496107 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 03:44:31.496115 | orchestrator | Tuesday 17 February 2026 03:44:30 +0000 (0:00:00.355) 0:06:30.490 ****** 2026-02-17 03:44:31.496122 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:44:31.496129 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:44:31.496137 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:44:31.496144 | orchestrator | 2026-02-17 03:44:31.496151 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 03:44:31.496163 | orchestrator | Tuesday 17 February 2026 03:44:30 +0000 (0:00:00.349) 0:06:30.840 ****** 2026-02-17 03:44:31.496170 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:44:31.496177 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:44:31.496185 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:44:31.496192 | orchestrator | 2026-02-17 03:44:31.496204 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 03:44:31.496217 | orchestrator | Tuesday 17 February 2026 03:44:31 +0000 (0:00:00.672) 0:06:31.513 ****** 2026-02-17 03:45:31.160181 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:45:31.160326 | orchestrator | ok: 
[testbed-node-4] 2026-02-17 03:45:31.160354 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:45:31.160373 | orchestrator | 2026-02-17 03:45:31.160393 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 03:45:31.160411 | orchestrator | Tuesday 17 February 2026 03:44:31 +0000 (0:00:00.377) 0:06:31.891 ****** 2026-02-17 03:45:31.160428 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:45:31.160446 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:45:31.160465 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:45:31.160482 | orchestrator | 2026-02-17 03:45:31.160500 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 03:45:31.160521 | orchestrator | Tuesday 17 February 2026 03:44:32 +0000 (0:00:00.364) 0:06:32.255 ****** 2026-02-17 03:45:31.160540 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:45:31.160635 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:45:31.160657 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:45:31.160676 | orchestrator | 2026-02-17 03:45:31.160696 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 03:45:31.160717 | orchestrator | Tuesday 17 February 2026 03:44:32 +0000 (0:00:00.341) 0:06:32.597 ****** 2026-02-17 03:45:31.160735 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:45:31.160753 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:45:31.160771 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:45:31.160790 | orchestrator | 2026-02-17 03:45:31.160811 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 03:45:31.160830 | orchestrator | Tuesday 17 February 2026 03:44:33 +0000 (0:00:00.639) 0:06:33.236 ****** 2026-02-17 03:45:31.160851 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:45:31.160871 | orchestrator | skipping: [testbed-node-4] 2026-02-17 
03:45:31.160890 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:45:31.160908 | orchestrator | 2026-02-17 03:45:31.160928 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 03:45:31.161028 | orchestrator | Tuesday 17 February 2026 03:44:33 +0000 (0:00:00.364) 0:06:33.601 ****** 2026-02-17 03:45:31.161050 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:45:31.161068 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:45:31.161087 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:45:31.161106 | orchestrator | 2026-02-17 03:45:31.161124 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 03:45:31.161143 | orchestrator | Tuesday 17 February 2026 03:44:33 +0000 (0:00:00.356) 0:06:33.957 ****** 2026-02-17 03:45:31.161161 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:45:31.161179 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:45:31.161197 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:45:31.161214 | orchestrator | 2026-02-17 03:45:31.161232 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-17 03:45:31.161251 | orchestrator | Tuesday 17 February 2026 03:44:34 +0000 (0:00:00.878) 0:06:34.836 ****** 2026-02-17 03:45:31.161270 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:45:31.161288 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:45:31.161307 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:45:31.161321 | orchestrator | 2026-02-17 03:45:31.161332 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-17 03:45:31.161343 | orchestrator | Tuesday 17 February 2026 03:44:35 +0000 (0:00:00.364) 0:06:35.201 ****** 2026-02-17 03:45:31.161354 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 03:45:31.161366 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 03:45:31.161377 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 03:45:31.161420 | orchestrator | 2026-02-17 03:45:31.161432 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-17 03:45:31.161443 | orchestrator | Tuesday 17 February 2026 03:44:36 +0000 (0:00:00.941) 0:06:36.142 ****** 2026-02-17 03:45:31.161454 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:45:31.161466 | orchestrator | 2026-02-17 03:45:31.161477 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-17 03:45:31.161487 | orchestrator | Tuesday 17 February 2026 03:44:36 +0000 (0:00:00.833) 0:06:36.975 ****** 2026-02-17 03:45:31.161498 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:45:31.161509 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:45:31.161520 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:45:31.161531 | orchestrator | 2026-02-17 03:45:31.161540 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-17 03:45:31.161550 | orchestrator | Tuesday 17 February 2026 03:44:37 +0000 (0:00:00.348) 0:06:37.324 ****** 2026-02-17 03:45:31.161559 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:45:31.161569 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:45:31.161578 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:45:31.161588 | orchestrator | 2026-02-17 03:45:31.161597 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-17 03:45:31.161607 | orchestrator | Tuesday 17 February 2026 03:44:37 +0000 (0:00:00.362) 0:06:37.686 ****** 2026-02-17 03:45:31.161616 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:45:31.161626 | 
orchestrator | ok: [testbed-node-4] 2026-02-17 03:45:31.161635 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:45:31.161645 | orchestrator | 2026-02-17 03:45:31.161654 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-17 03:45:31.161664 | orchestrator | Tuesday 17 February 2026 03:44:38 +0000 (0:00:00.643) 0:06:38.330 ****** 2026-02-17 03:45:31.161673 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:45:31.161683 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:45:31.161692 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:45:31.161701 | orchestrator | 2026-02-17 03:45:31.161724 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-17 03:45:31.161734 | orchestrator | Tuesday 17 February 2026 03:44:38 +0000 (0:00:00.701) 0:06:39.032 ****** 2026-02-17 03:45:31.161744 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-17 03:45:31.161778 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-17 03:45:31.161789 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-17 03:45:31.161799 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-17 03:45:31.161809 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-17 03:45:31.161819 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-17 03:45:31.161828 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-17 03:45:31.161838 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-17 03:45:31.161847 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-17 03:45:31.161857 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-17 03:45:31.161866 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-17 03:45:31.161876 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-17 03:45:31.161886 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-17 03:45:31.161895 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-17 03:45:31.161913 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-17 03:45:31.161923 | orchestrator | 2026-02-17 03:45:31.161958 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-17 03:45:31.161978 | orchestrator | Tuesday 17 February 2026 03:44:41 +0000 (0:00:02.055) 0:06:41.087 ****** 2026-02-17 03:45:31.161991 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:45:31.162001 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:45:31.162010 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:45:31.162075 | orchestrator | 2026-02-17 03:45:31.162086 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-17 03:45:31.162095 | orchestrator | Tuesday 17 February 2026 03:44:41 +0000 (0:00:00.320) 0:06:41.408 ****** 2026-02-17 03:45:31.162105 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:45:31.162115 | orchestrator | 2026-02-17 03:45:31.162124 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-17 03:45:31.162134 | orchestrator | Tuesday 17 February 2026 03:44:42 +0000 (0:00:00.921) 
0:06:42.329 ****** 2026-02-17 03:45:31.162143 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-17 03:45:31.162153 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-17 03:45:31.162162 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-17 03:45:31.162172 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-17 03:45:31.162182 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-17 03:45:31.162192 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-17 03:45:31.162201 | orchestrator | 2026-02-17 03:45:31.162211 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-17 03:45:31.162220 | orchestrator | Tuesday 17 February 2026 03:44:43 +0000 (0:00:00.969) 0:06:43.299 ****** 2026-02-17 03:45:31.162229 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:45:31.162239 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 03:45:31.162249 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 03:45:31.162258 | orchestrator | 2026-02-17 03:45:31.162268 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-17 03:45:31.162277 | orchestrator | Tuesday 17 February 2026 03:44:45 +0000 (0:00:02.024) 0:06:45.324 ****** 2026-02-17 03:45:31.162287 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-17 03:45:31.162296 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 03:45:31.162306 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:45:31.162315 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-17 03:45:31.162325 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-17 03:45:31.162334 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:45:31.162344 
| orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-17 03:45:31.162353 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-17 03:45:31.162363 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:45:31.162372 | orchestrator | 2026-02-17 03:45:31.162381 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-17 03:45:31.162391 | orchestrator | Tuesday 17 February 2026 03:44:46 +0000 (0:00:01.194) 0:06:46.518 ****** 2026-02-17 03:45:31.162400 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 03:45:31.162410 | orchestrator | 2026-02-17 03:45:31.162419 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-17 03:45:31.162433 | orchestrator | Tuesday 17 February 2026 03:44:48 +0000 (0:00:01.970) 0:06:48.488 ****** 2026-02-17 03:45:31.162469 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:45:31.162489 | orchestrator | 2026-02-17 03:45:31.162517 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-17 03:45:31.162536 | orchestrator | Tuesday 17 February 2026 03:44:49 +0000 (0:00:00.925) 0:06:49.414 ****** 2026-02-17 03:45:31.162566 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'}) 2026-02-17 03:46:09.926358 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}) 2026-02-17 03:46:09.926464 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'}) 2026-02-17 03:46:09.926478 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'}) 2026-02-17 03:46:09.926489 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'}) 2026-02-17 03:46:09.926499 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}) 2026-02-17 03:46:09.926509 | orchestrator | 2026-02-17 03:46:09.926520 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-17 03:46:09.926531 | orchestrator | Tuesday 17 February 2026 03:45:31 +0000 (0:00:41.772) 0:07:31.187 ****** 2026-02-17 03:46:09.926541 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.926553 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:46:09.926563 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:46:09.926573 | orchestrator | 2026-02-17 03:46:09.926583 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-17 03:46:09.926593 | orchestrator | Tuesday 17 February 2026 03:45:31 +0000 (0:00:00.331) 0:07:31.519 ****** 2026-02-17 03:46:09.926603 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:46:09.926614 | orchestrator | 2026-02-17 03:46:09.926623 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-17 03:46:09.926639 | orchestrator | Tuesday 17 February 2026 03:45:32 +0000 (0:00:00.868) 0:07:32.387 ****** 2026-02-17 03:46:09.926656 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:46:09.926675 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:46:09.926691 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:46:09.926708 | orchestrator | 2026-02-17 03:46:09.926725 
| orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-17 03:46:09.926741 | orchestrator | Tuesday 17 February 2026 03:45:33 +0000 (0:00:00.735) 0:07:33.123 ****** 2026-02-17 03:46:09.926758 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:46:09.926774 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:46:09.926792 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:46:09.926807 | orchestrator | 2026-02-17 03:46:09.926817 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-17 03:46:09.926827 | orchestrator | Tuesday 17 February 2026 03:45:35 +0000 (0:00:02.699) 0:07:35.822 ****** 2026-02-17 03:46:09.926838 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:46:09.926849 | orchestrator | 2026-02-17 03:46:09.926859 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-17 03:46:09.926871 | orchestrator | Tuesday 17 February 2026 03:45:36 +0000 (0:00:00.861) 0:07:36.684 ****** 2026-02-17 03:46:09.926882 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:46:09.926895 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:46:09.926911 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:46:09.926929 | orchestrator | 2026-02-17 03:46:09.926944 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-17 03:46:09.926958 | orchestrator | Tuesday 17 February 2026 03:45:37 +0000 (0:00:01.255) 0:07:37.940 ****** 2026-02-17 03:46:09.927035 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:46:09.927054 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:46:09.927070 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:46:09.927086 | orchestrator | 2026-02-17 03:46:09.927104 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] 
*************************************** 2026-02-17 03:46:09.927123 | orchestrator | Tuesday 17 February 2026 03:45:39 +0000 (0:00:01.225) 0:07:39.166 ****** 2026-02-17 03:46:09.927139 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:46:09.927157 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:46:09.927172 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:46:09.927184 | orchestrator | 2026-02-17 03:46:09.927196 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-17 03:46:09.927207 | orchestrator | Tuesday 17 February 2026 03:45:41 +0000 (0:00:02.096) 0:07:41.262 ****** 2026-02-17 03:46:09.927217 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.927227 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:46:09.927237 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:46:09.927247 | orchestrator | 2026-02-17 03:46:09.927257 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-17 03:46:09.927267 | orchestrator | Tuesday 17 February 2026 03:45:41 +0000 (0:00:00.403) 0:07:41.665 ****** 2026-02-17 03:46:09.927277 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.927287 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:46:09.927296 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:46:09.927306 | orchestrator | 2026-02-17 03:46:09.927316 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-17 03:46:09.927326 | orchestrator | Tuesday 17 February 2026 03:45:42 +0000 (0:00:00.386) 0:07:42.052 ****** 2026-02-17 03:46:09.927336 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-17 03:46:09.927361 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-17 03:46:09.927371 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-17 03:46:09.927381 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-17 03:46:09.927391 | orchestrator | ok: 
[testbed-node-4] => (item=3) 2026-02-17 03:46:09.927400 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-17 03:46:09.927410 | orchestrator | 2026-02-17 03:46:09.927420 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-17 03:46:09.927450 | orchestrator | Tuesday 17 February 2026 03:45:43 +0000 (0:00:01.078) 0:07:43.130 ****** 2026-02-17 03:46:09.927461 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-17 03:46:09.927471 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-17 03:46:09.927481 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-17 03:46:09.927492 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-17 03:46:09.927501 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-17 03:46:09.927511 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-17 03:46:09.927521 | orchestrator | 2026-02-17 03:46:09.927532 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-17 03:46:09.927541 | orchestrator | Tuesday 17 February 2026 03:45:45 +0000 (0:00:02.646) 0:07:45.776 ****** 2026-02-17 03:46:09.927552 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-17 03:46:09.927561 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-17 03:46:09.927571 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-17 03:46:09.927581 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-17 03:46:09.927591 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-17 03:46:09.927600 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-17 03:46:09.927610 | orchestrator | 2026-02-17 03:46:09.927620 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-17 03:46:09.927630 | orchestrator | Tuesday 17 February 2026 03:45:49 +0000 (0:00:03.483) 0:07:49.259 ****** 2026-02-17 03:46:09.927640 | orchestrator | 
skipping: [testbed-node-3] 2026-02-17 03:46:09.927650 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:46:09.927678 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-17 03:46:09.927695 | orchestrator | 2026-02-17 03:46:09.927711 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-17 03:46:09.927726 | orchestrator | Tuesday 17 February 2026 03:45:52 +0000 (0:00:02.877) 0:07:52.137 ****** 2026-02-17 03:46:09.927743 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.927760 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:46:09.927777 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-17 03:46:09.927794 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-17 03:46:09.927808 | orchestrator | 2026-02-17 03:46:09.927818 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-17 03:46:09.927828 | orchestrator | Tuesday 17 February 2026 03:46:05 +0000 (0:00:13.111) 0:08:05.249 ****** 2026-02-17 03:46:09.927838 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.927848 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:46:09.927857 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:46:09.927867 | orchestrator | 2026-02-17 03:46:09.927877 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-17 03:46:09.927887 | orchestrator | Tuesday 17 February 2026 03:46:06 +0000 (0:00:00.975) 0:08:06.224 ****** 2026-02-17 03:46:09.927897 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.927906 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:46:09.927916 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:46:09.927926 | orchestrator | 2026-02-17 03:46:09.927935 | orchestrator | RUNNING HANDLER [ceph-handler : Osds 
handler] ********************************** 2026-02-17 03:46:09.927945 | orchestrator | Tuesday 17 February 2026 03:46:06 +0000 (0:00:00.697) 0:08:06.922 ****** 2026-02-17 03:46:09.927955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:46:09.927965 | orchestrator | 2026-02-17 03:46:09.927975 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-17 03:46:09.928039 | orchestrator | Tuesday 17 February 2026 03:46:07 +0000 (0:00:00.682) 0:08:07.605 ****** 2026-02-17 03:46:09.928049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:46:09.928059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:46:09.928069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:46:09.928079 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.928089 | orchestrator | 2026-02-17 03:46:09.928098 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-17 03:46:09.928108 | orchestrator | Tuesday 17 February 2026 03:46:07 +0000 (0:00:00.435) 0:08:08.040 ****** 2026-02-17 03:46:09.928118 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.928127 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:46:09.928137 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:46:09.928147 | orchestrator | 2026-02-17 03:46:09.928157 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-17 03:46:09.928166 | orchestrator | Tuesday 17 February 2026 03:46:08 +0000 (0:00:00.335) 0:08:08.376 ****** 2026-02-17 03:46:09.928176 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:46:09.928186 | orchestrator | 2026-02-17 03:46:09.928195 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 
2026-02-17 03:46:09.928205 | orchestrator | Tuesday 17 February 2026 03:46:08 +0000 (0:00:00.253) 0:08:08.630 ******
2026-02-17 03:46:09.928215 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:09.928224 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:09.928234 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:09.928244 | orchestrator |
2026-02-17 03:46:09.928253 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-17 03:46:09.928263 | orchestrator | Tuesday 17 February 2026 03:46:09 +0000 (0:00:00.662) 0:08:09.292 ******
2026-02-17 03:46:09.928281 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:09.928291 | orchestrator |
2026-02-17 03:46:09.928307 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-17 03:46:09.928317 | orchestrator | Tuesday 17 February 2026 03:46:09 +0000 (0:00:00.284) 0:08:09.577 ******
2026-02-17 03:46:09.928327 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:09.928337 | orchestrator |
2026-02-17 03:46:09.928346 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-17 03:46:09.928356 | orchestrator | Tuesday 17 February 2026 03:46:09 +0000 (0:00:00.242) 0:08:09.820 ******
2026-02-17 03:46:09.928373 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.225491 | orchestrator |
2026-02-17 03:46:30.225598 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-17 03:46:30.225612 | orchestrator | Tuesday 17 February 2026 03:46:09 +0000 (0:00:00.137) 0:08:09.957 ******
2026-02-17 03:46:30.225619 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.225627 | orchestrator |
2026-02-17 03:46:30.225633 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-17 03:46:30.225640 | orchestrator | Tuesday 17 February 2026 03:46:10 +0000 (0:00:00.304) 0:08:10.262 ******
2026-02-17 03:46:30.225646 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.225653 | orchestrator |
2026-02-17 03:46:30.225661 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-17 03:46:30.225667 | orchestrator | Tuesday 17 February 2026 03:46:10 +0000 (0:00:00.268) 0:08:10.530 ******
2026-02-17 03:46:30.225673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 03:46:30.225681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 03:46:30.225688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 03:46:30.225695 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.225701 | orchestrator |
2026-02-17 03:46:30.225708 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-17 03:46:30.225714 | orchestrator | Tuesday 17 February 2026 03:46:10 +0000 (0:00:00.445) 0:08:10.975 ******
2026-02-17 03:46:30.225721 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.225727 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.225734 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.225740 | orchestrator |
2026-02-17 03:46:30.225747 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-17 03:46:30.225753 | orchestrator | Tuesday 17 February 2026 03:46:11 +0000 (0:00:00.673) 0:08:11.649 ******
2026-02-17 03:46:30.225760 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.225767 | orchestrator |
2026-02-17 03:46:30.225774 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-17 03:46:30.225781 | orchestrator | Tuesday 17 February 2026 03:46:11 +0000 (0:00:00.284) 0:08:11.933 ******
2026-02-17 03:46:30.225788 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.225795 | orchestrator |
2026-02-17 03:46:30.225801 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-17 03:46:30.225808 | orchestrator |
2026-02-17 03:46:30.225815 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 03:46:30.225822 | orchestrator | Tuesday 17 February 2026 03:46:12 +0000 (0:00:00.729) 0:08:12.663 ******
2026-02-17 03:46:30.225830 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:46:30.225839 | orchestrator |
2026-02-17 03:46:30.225845 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 03:46:30.225851 | orchestrator | Tuesday 17 February 2026 03:46:13 +0000 (0:00:01.338) 0:08:14.002 ******
2026-02-17 03:46:30.225858 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:46:30.225887 | orchestrator |
2026-02-17 03:46:30.225894 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 03:46:30.225901 | orchestrator | Tuesday 17 February 2026 03:46:15 +0000 (0:00:01.444) 0:08:15.447 ******
2026-02-17 03:46:30.225908 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.225915 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.225922 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.225929 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:46:30.225935 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:46:30.225941 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:46:30.225947 | orchestrator |
2026-02-17 03:46:30.225954 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 03:46:30.225960 | orchestrator | Tuesday 17 February 2026 03:46:16 +0000 (0:00:01.303) 0:08:16.751 ******
2026-02-17 03:46:30.225966 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.225973 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.225980 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:46:30.225986 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:46:30.225993 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226000 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:46:30.226006 | orchestrator |
2026-02-17 03:46:30.226094 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 03:46:30.226101 | orchestrator | Tuesday 17 February 2026 03:46:17 +0000 (0:00:00.738) 0:08:17.489 ******
2026-02-17 03:46:30.226108 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:46:30.226115 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:46:30.226122 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:46:30.226128 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226134 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226140 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226147 | orchestrator |
2026-02-17 03:46:30.226154 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 03:46:30.226162 | orchestrator | Tuesday 17 February 2026 03:46:18 +0000 (0:00:00.971) 0:08:18.461 ******
2026-02-17 03:46:30.226169 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226177 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:46:30.226184 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226193 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:46:30.226201 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226208 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:46:30.226216 | orchestrator |
2026-02-17 03:46:30.226237 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 03:46:30.226247 | orchestrator | Tuesday 17 February 2026 03:46:19 +0000 (0:00:00.740) 0:08:19.201 ******
2026-02-17 03:46:30.226254 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.226262 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.226269 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.226277 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:46:30.226285 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:46:30.226325 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:46:30.226341 | orchestrator |
2026-02-17 03:46:30.226347 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 03:46:30.226354 | orchestrator | Tuesday 17 February 2026 03:46:20 +0000 (0:00:01.318) 0:08:20.520 ******
2026-02-17 03:46:30.226361 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.226368 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.226374 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.226380 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226387 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226393 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226400 | orchestrator |
2026-02-17 03:46:30.226406 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 03:46:30.226413 | orchestrator | Tuesday 17 February 2026 03:46:21 +0000 (0:00:00.664) 0:08:21.185 ******
2026-02-17 03:46:30.226420 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.226435 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.226442 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.226449 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226455 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226462 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226469 | orchestrator |
2026-02-17 03:46:30.226476 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 03:46:30.226483 | orchestrator | Tuesday 17 February 2026 03:46:22 +0000 (0:00:00.929) 0:08:22.115 ******
2026-02-17 03:46:30.226489 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:46:30.226496 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:46:30.226502 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:46:30.226509 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:46:30.226516 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:46:30.226523 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:46:30.226529 | orchestrator |
2026-02-17 03:46:30.226536 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 03:46:30.226543 | orchestrator | Tuesday 17 February 2026 03:46:23 +0000 (0:00:01.046) 0:08:23.161 ******
2026-02-17 03:46:30.226549 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:46:30.226556 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:46:30.226571 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:46:30.226578 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:46:30.226592 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:46:30.226598 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:46:30.226605 | orchestrator |
2026-02-17 03:46:30.226611 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 03:46:30.226618 | orchestrator | Tuesday 17 February 2026 03:46:24 +0000 (0:00:01.419) 0:08:24.581 ******
2026-02-17 03:46:30.226624 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.226631 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.226638 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.226645 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226651 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226657 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226664 | orchestrator |
2026-02-17 03:46:30.226670 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 03:46:30.226677 | orchestrator | Tuesday 17 February 2026 03:46:25 +0000 (0:00:00.706) 0:08:25.287 ******
2026-02-17 03:46:30.226684 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.226690 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.226697 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.226703 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:46:30.226710 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:46:30.226716 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:46:30.226723 | orchestrator |
2026-02-17 03:46:30.226730 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 03:46:30.226736 | orchestrator | Tuesday 17 February 2026 03:46:26 +0000 (0:00:00.986) 0:08:26.273 ******
2026-02-17 03:46:30.226743 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:46:30.226750 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:46:30.226756 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:46:30.226763 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226769 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226775 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226781 | orchestrator |
2026-02-17 03:46:30.226787 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 03:46:30.226794 | orchestrator | Tuesday 17 February 2026 03:46:26 +0000 (0:00:00.711) 0:08:26.984 ******
2026-02-17 03:46:30.226801 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:46:30.226807 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:46:30.226814 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:46:30.226821 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226827 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226838 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226845 | orchestrator |
2026-02-17 03:46:30.226851 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 03:46:30.226857 | orchestrator | Tuesday 17 February 2026 03:46:27 +0000 (0:00:00.989) 0:08:27.974 ******
2026-02-17 03:46:30.226864 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:46:30.226871 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:46:30.226878 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:46:30.226885 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226891 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226896 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226902 | orchestrator |
2026-02-17 03:46:30.226908 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 03:46:30.226915 | orchestrator | Tuesday 17 February 2026 03:46:28 +0000 (0:00:00.657) 0:08:28.632 ******
2026-02-17 03:46:30.226921 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.226928 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.226935 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.226941 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.226948 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:46:30.226955 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:46:30.226961 | orchestrator |
2026-02-17 03:46:30.226968 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 03:46:30.226974 | orchestrator | Tuesday 17 February 2026 03:46:29 +0000 (0:00:00.976) 0:08:29.609 ******
2026-02-17 03:46:30.226980 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:46:30.226986 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:46:30.226993 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:46:30.227000 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:46:30.227036 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:47:02.989406 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:47:02.989505 | orchestrator |
2026-02-17 03:47:02.989522 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 03:47:02.989535 | orchestrator | Tuesday 17 February 2026 03:46:30 +0000 (0:00:00.646) 0:08:30.255 ******
2026-02-17 03:47:02.989546 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:02.989557 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:02.989568 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:02.989579 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:47:02.989590 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:47:02.989601 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:47:02.989612 | orchestrator |
2026-02-17 03:47:02.989624 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 03:47:02.989635 | orchestrator | Tuesday 17 February 2026 03:46:31 +0000 (0:00:00.958) 0:08:31.214 ******
2026-02-17 03:47:02.989646 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.989656 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.989667 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.989712 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:47:02.989724 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:47:02.989735 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:47:02.989746 | orchestrator |
2026-02-17 03:47:02.989757 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 03:47:02.989768 | orchestrator | Tuesday 17 February 2026 03:46:31 +0000 (0:00:00.645) 0:08:31.859 ******
2026-02-17 03:47:02.989779 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.989790 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.989801 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.989812 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:47:02.989823 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:47:02.989834 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:47:02.989845 | orchestrator |
2026-02-17 03:47:02.989856 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-17 03:47:02.989867 | orchestrator | Tuesday 17 February 2026 03:46:33 +0000 (0:00:01.435) 0:08:33.295 ******
2026-02-17 03:47:02.989900 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-17 03:47:02.989912 | orchestrator |
2026-02-17 03:47:02.989923 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-17 03:47:02.989933 | orchestrator | Tuesday 17 February 2026 03:46:37 +0000 (0:00:04.295) 0:08:37.591 ******
2026-02-17 03:47:02.989945 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-17 03:47:02.989958 | orchestrator |
2026-02-17 03:47:02.989971 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-17 03:47:02.989983 | orchestrator | Tuesday 17 February 2026 03:46:39 +0000 (0:00:02.020) 0:08:39.611 ******
2026-02-17 03:47:02.989996 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:47:02.990009 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:47:02.990094 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:47:02.990108 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:47:02.990120 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:47:02.990133 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:47:02.990154 | orchestrator |
2026-02-17 03:47:02.990167 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-17 03:47:02.990180 | orchestrator | Tuesday 17 February 2026 03:46:41 +0000 (0:00:01.606) 0:08:41.218 ******
2026-02-17 03:47:02.990193 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:47:02.990206 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:47:02.990218 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:47:02.990231 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:47:02.990243 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:47:02.990255 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:47:02.990268 | orchestrator |
2026-02-17 03:47:02.990280 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-17 03:47:02.990293 | orchestrator | Tuesday 17 February 2026 03:46:42 +0000 (0:00:01.382) 0:08:42.601 ******
2026-02-17 03:47:02.990307 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:47:02.990320 | orchestrator |
2026-02-17 03:47:02.990331 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-17 03:47:02.990342 | orchestrator | Tuesday 17 February 2026 03:46:43 +0000 (0:00:01.436) 0:08:44.038 ******
2026-02-17 03:47:02.990353 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:47:02.990364 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:47:02.990375 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:47:02.990386 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:47:02.990397 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:47:02.990408 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:47:02.990419 | orchestrator |
2026-02-17 03:47:02.990430 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-17 03:47:02.990441 | orchestrator | Tuesday 17 February 2026 03:46:45 +0000 (0:00:01.637) 0:08:45.675 ******
2026-02-17 03:47:02.990452 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:47:02.990463 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:47:02.990474 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:47:02.990485 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:47:02.990495 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:47:02.990506 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:47:02.990517 | orchestrator |
2026-02-17 03:47:02.990528 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-17 03:47:02.990539 | orchestrator | Tuesday 17 February 2026 03:46:49 +0000 (0:00:03.784) 0:08:49.459 ******
2026-02-17 03:47:02.990556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:47:02.990568 | orchestrator |
2026-02-17 03:47:02.990579 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-17 03:47:02.990598 | orchestrator | Tuesday 17 February 2026 03:46:50 +0000 (0:00:01.402) 0:08:50.862 ******
2026-02-17 03:47:02.990609 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.990621 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.990632 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.990643 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:47:02.990671 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:47:02.990682 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:47:02.990693 | orchestrator |
2026-02-17 03:47:02.990704 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-17 03:47:02.990715 | orchestrator | Tuesday 17 February 2026 03:46:51 +0000 (0:00:00.668) 0:08:51.530 ******
2026-02-17 03:47:02.990726 | orchestrator | changed: [testbed-node-3]
2026-02-17 03:47:02.990737 | orchestrator | changed: [testbed-node-4]
2026-02-17 03:47:02.990748 | orchestrator | changed: [testbed-node-5]
2026-02-17 03:47:02.990759 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:47:02.990770 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:47:02.990782 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:47:02.990792 | orchestrator |
2026-02-17 03:47:02.990804 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-17 03:47:02.990815 | orchestrator | Tuesday 17 February 2026 03:46:53 +0000 (0:00:02.502) 0:08:54.033 ******
2026-02-17 03:47:02.990826 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.990837 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.990848 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.990858 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:47:02.990869 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:47:02.990880 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:47:02.990891 | orchestrator |
2026-02-17 03:47:02.990902 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-17 03:47:02.990913 | orchestrator |
2026-02-17 03:47:02.990924 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 03:47:02.990935 | orchestrator | Tuesday 17 February 2026 03:46:55 +0000 (0:00:01.268) 0:08:55.302 ******
2026-02-17 03:47:02.990946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 03:47:02.990957 | orchestrator |
2026-02-17 03:47:02.990969 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 03:47:02.990980 | orchestrator | Tuesday 17 February 2026 03:46:55 +0000 (0:00:00.590) 0:08:55.893 ******
2026-02-17 03:47:02.990991 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 03:47:02.991002 | orchestrator |
2026-02-17 03:47:02.991012 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 03:47:02.991024 | orchestrator | Tuesday 17 February 2026 03:46:56 +0000 (0:00:00.806) 0:08:56.700 ******
2026-02-17 03:47:02.991034 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:02.991045 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:02.991106 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:02.991118 | orchestrator |
2026-02-17 03:47:02.991129 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 03:47:02.991140 | orchestrator | Tuesday 17 February 2026 03:46:57 +0000 (0:00:00.388) 0:08:57.088 ******
2026-02-17 03:47:02.991150 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.991161 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.991172 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.991183 | orchestrator |
2026-02-17 03:47:02.991194 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 03:47:02.991206 | orchestrator | Tuesday 17 February 2026 03:46:57 +0000 (0:00:00.783) 0:08:57.871 ******
2026-02-17 03:47:02.991216 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.991227 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.991238 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.991249 | orchestrator |
2026-02-17 03:47:02.991260 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 03:47:02.991278 | orchestrator | Tuesday 17 February 2026 03:46:58 +0000 (0:00:00.798) 0:08:58.670 ******
2026-02-17 03:47:02.991288 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.991299 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.991310 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.991321 | orchestrator |
2026-02-17 03:47:02.991332 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 03:47:02.991343 | orchestrator | Tuesday 17 February 2026 03:46:59 +0000 (0:00:01.216) 0:08:59.886 ******
2026-02-17 03:47:02.991354 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:02.991365 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:02.991376 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:02.991387 | orchestrator |
2026-02-17 03:47:02.991398 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 03:47:02.991409 | orchestrator | Tuesday 17 February 2026 03:47:00 +0000 (0:00:00.374) 0:09:00.261 ******
2026-02-17 03:47:02.991419 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:02.991430 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:02.991441 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:02.991452 | orchestrator |
2026-02-17 03:47:02.991463 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 03:47:02.991475 | orchestrator | Tuesday 17 February 2026 03:47:00 +0000 (0:00:00.345) 0:09:00.606 ******
2026-02-17 03:47:02.991486 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:02.991497 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:02.991508 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:02.991520 | orchestrator |
2026-02-17 03:47:02.991531 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 03:47:02.991542 | orchestrator | Tuesday 17 February 2026 03:47:00 +0000 (0:00:00.340) 0:09:00.947 ******
2026-02-17 03:47:02.991553 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.991564 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.991575 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.991586 | orchestrator |
2026-02-17 03:47:02.991597 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 03:47:02.991613 | orchestrator | Tuesday 17 February 2026 03:47:02 +0000 (0:00:01.096) 0:09:02.044 ******
2026-02-17 03:47:02.991628 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:02.991647 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:02.991665 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:02.991685 | orchestrator |
2026-02-17 03:47:02.991703 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 03:47:02.991723 | orchestrator | Tuesday 17 February 2026 03:47:02 +0000 (0:00:00.691) 0:09:02.735 ******
2026-02-17 03:47:02.991742 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:02.991756 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:02.991775 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:37.350286 | orchestrator |
2026-02-17 03:47:37.350543 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 03:47:37.350576 | orchestrator | Tuesday 17 February 2026 03:47:02 +0000 (0:00:00.281) 0:09:03.016 ******
2026-02-17 03:47:37.350597 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:37.350618 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:37.350630 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:37.350642 | orchestrator |
2026-02-17 03:47:37.350653 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 03:47:37.350665 | orchestrator | Tuesday 17 February 2026 03:47:03 +0000 (0:00:00.293) 0:09:03.310 ******
2026-02-17 03:47:37.350677 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:37.350689 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:37.350700 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:37.350711 | orchestrator |
2026-02-17 03:47:37.350722 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 03:47:37.350734 | orchestrator | Tuesday 17 February 2026 03:47:03 +0000 (0:00:00.594) 0:09:03.905 ******
2026-02-17 03:47:37.350779 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:37.350792 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:37.350809 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:37.350829 | orchestrator |
2026-02-17 03:47:37.350847 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 03:47:37.350866 | orchestrator | Tuesday 17 February 2026 03:47:04 +0000 (0:00:00.381) 0:09:04.286 ******
2026-02-17 03:47:37.350883 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:37.350901 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:37.350919 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:37.350938 | orchestrator |
2026-02-17 03:47:37.350956 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 03:47:37.350975 | orchestrator | Tuesday 17 February 2026 03:47:04 +0000 (0:00:00.359) 0:09:04.645 ******
2026-02-17 03:47:37.350994 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:37.351014 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:37.351033 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:37.351052 | orchestrator |
2026-02-17 03:47:37.351072 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 03:47:37.351089 | orchestrator | Tuesday 17 February 2026 03:47:04 +0000 (0:00:00.319) 0:09:04.965 ******
2026-02-17 03:47:37.351130 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:37.351141 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:37.351152 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:37.351163 | orchestrator |
2026-02-17 03:47:37.351174 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 03:47:37.351185 | orchestrator | Tuesday 17 February 2026 03:47:05 +0000 (0:00:00.704) 0:09:05.669 ******
2026-02-17 03:47:37.351196 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:37.351207 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:37.351217 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:37.351228 | orchestrator |
2026-02-17 03:47:37.351239 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 03:47:37.351250 | orchestrator | Tuesday 17 February 2026 03:47:05 +0000 (0:00:00.354) 0:09:06.024 ******
2026-02-17 03:47:37.351261 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:37.351272 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:37.351282 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:37.351293 | orchestrator |
2026-02-17 03:47:37.351304 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 03:47:37.351315 | orchestrator | Tuesday 17 February 2026 03:47:06 +0000 (0:00:00.386) 0:09:06.410 ******
2026-02-17 03:47:37.351326 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:47:37.351336 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:47:37.351347 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:47:37.351358 | orchestrator |
2026-02-17 03:47:37.351368 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-17 03:47:37.351379 | orchestrator | Tuesday 17 February 2026 03:47:07 +0000 (0:00:00.882) 0:09:07.292 ******
2026-02-17 03:47:37.351390 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:47:37.351401 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:47:37.351412 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-17 03:47:37.351424 | orchestrator |
2026-02-17 03:47:37.351435 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-17 03:47:37.351445 | orchestrator | Tuesday 17 February 2026 03:47:07 +0000 (0:00:00.458) 0:09:07.751 ******
2026-02-17 03:47:37.351456 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-17 03:47:37.351467 | orchestrator |
2026-02-17 03:47:37.351478 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-17 03:47:37.351489 | orchestrator | Tuesday 17 February 2026 03:47:09 +0000 (0:00:02.114) 0:09:09.866 ******
2026-02-17 03:47:37.351502 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-17 03:47:37.351527 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:47:37.351539 | orchestrator |
2026-02-17 03:47:37.351550 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-17 03:47:37.351561 | orchestrator | Tuesday 17 February 2026 03:47:10 +0000 (0:00:00.261) 0:09:10.127 ******
2026-02-17 03:47:37.351589 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-17 03:47:37.351658 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-17 03:47:37.351671 | orchestrator |
2026-02-17 03:47:37.351682 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-17 03:47:37.351694 | orchestrator | Tuesday 17 February 2026 03:47:18 +0000 (0:00:08.650) 0:09:18.778 ****** 2026-02-17 03:47:37.351705 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 03:47:37.351716 | orchestrator | 2026-02-17 03:47:37.351727 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-17 03:47:37.351738 | orchestrator | Tuesday 17 February 2026 03:47:22 +0000 (0:00:04.053) 0:09:22.831 ****** 2026-02-17 03:47:37.351748 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:47:37.351761 | orchestrator | 2026-02-17 03:47:37.351771 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-17 03:47:37.351782 | orchestrator | Tuesday 17 February 2026 03:47:23 +0000 (0:00:00.582) 0:09:23.414 ****** 2026-02-17 03:47:37.351793 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-17 03:47:37.351804 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-17 03:47:37.351814 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-17 03:47:37.351826 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-17 03:47:37.351836 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-17 03:47:37.351847 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-17 03:47:37.351858 | orchestrator | 2026-02-17 03:47:37.351869 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-17 03:47:37.351880 | orchestrator | Tuesday 17 February 2026 03:47:24 +0000 (0:00:01.082) 
0:09:24.497 ****** 2026-02-17 03:47:37.351890 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:47:37.351901 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 03:47:37.351912 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 03:47:37.351923 | orchestrator | 2026-02-17 03:47:37.351933 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-17 03:47:37.351944 | orchestrator | Tuesday 17 February 2026 03:47:26 +0000 (0:00:02.098) 0:09:26.596 ****** 2026-02-17 03:47:37.351955 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-17 03:47:37.351967 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 03:47:37.351978 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:47:37.351989 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-17 03:47:37.352000 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-17 03:47:37.352011 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:47:37.352021 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-17 03:47:37.352032 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-17 03:47:37.352051 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:47:37.352062 | orchestrator | 2026-02-17 03:47:37.352073 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-17 03:47:37.352084 | orchestrator | Tuesday 17 February 2026 03:47:28 +0000 (0:00:01.535) 0:09:28.132 ****** 2026-02-17 03:47:37.352114 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:47:37.352126 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:47:37.352137 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:47:37.352147 | orchestrator | 2026-02-17 03:47:37.352158 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 
2026-02-17 03:47:37.352169 | orchestrator | Tuesday 17 February 2026 03:47:30 +0000 (0:00:02.804) 0:09:30.937 ****** 2026-02-17 03:47:37.352180 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:37.352191 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:37.352202 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:37.352213 | orchestrator | 2026-02-17 03:47:37.352224 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-17 03:47:37.352234 | orchestrator | Tuesday 17 February 2026 03:47:31 +0000 (0:00:00.375) 0:09:31.312 ****** 2026-02-17 03:47:37.352245 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:47:37.352256 | orchestrator | 2026-02-17 03:47:37.352267 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-17 03:47:37.352278 | orchestrator | Tuesday 17 February 2026 03:47:32 +0000 (0:00:00.903) 0:09:32.216 ****** 2026-02-17 03:47:37.352288 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:47:37.352299 | orchestrator | 2026-02-17 03:47:37.352310 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-17 03:47:37.352321 | orchestrator | Tuesday 17 February 2026 03:47:32 +0000 (0:00:00.597) 0:09:32.813 ****** 2026-02-17 03:47:37.352332 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:47:37.352342 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:47:37.352353 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:47:37.352364 | orchestrator | 2026-02-17 03:47:37.352375 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-17 03:47:37.352392 | orchestrator | Tuesday 17 February 2026 03:47:34 +0000 (0:00:01.238) 0:09:34.051 ****** 2026-02-17 
03:47:37.352403 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:47:37.352414 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:47:37.352425 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:47:37.352436 | orchestrator | 2026-02-17 03:47:37.352447 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-17 03:47:37.352458 | orchestrator | Tuesday 17 February 2026 03:47:35 +0000 (0:00:01.552) 0:09:35.604 ****** 2026-02-17 03:47:37.352469 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:47:37.352480 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:47:37.352490 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:47:37.352502 | orchestrator | 2026-02-17 03:47:37.352521 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-17 03:47:58.714997 | orchestrator | Tuesday 17 February 2026 03:47:37 +0000 (0:00:01.766) 0:09:37.370 ****** 2026-02-17 03:47:58.715104 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:47:58.715116 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:47:58.715157 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:47:58.715164 | orchestrator | 2026-02-17 03:47:58.715170 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-17 03:47:58.715176 | orchestrator | Tuesday 17 February 2026 03:47:39 +0000 (0:00:01.994) 0:09:39.365 ****** 2026-02-17 03:47:58.715182 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715188 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715193 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715198 | orchestrator | 2026-02-17 03:47:58.715204 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-17 03:47:58.715228 | orchestrator | Tuesday 17 February 2026 03:47:40 +0000 (0:00:01.619) 0:09:40.984 ****** 2026-02-17 03:47:58.715234 | orchestrator 
| changed: [testbed-node-3] 2026-02-17 03:47:58.715239 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:47:58.715244 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:47:58.715249 | orchestrator | 2026-02-17 03:47:58.715255 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-17 03:47:58.715260 | orchestrator | Tuesday 17 February 2026 03:47:41 +0000 (0:00:00.701) 0:09:41.685 ****** 2026-02-17 03:47:58.715266 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:47:58.715272 | orchestrator | 2026-02-17 03:47:58.715277 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-17 03:47:58.715282 | orchestrator | Tuesday 17 February 2026 03:47:42 +0000 (0:00:00.864) 0:09:42.549 ****** 2026-02-17 03:47:58.715287 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715292 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715297 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715302 | orchestrator | 2026-02-17 03:47:58.715307 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-17 03:47:58.715312 | orchestrator | Tuesday 17 February 2026 03:47:42 +0000 (0:00:00.358) 0:09:42.908 ****** 2026-02-17 03:47:58.715318 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:47:58.715323 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:47:58.715328 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:47:58.715333 | orchestrator | 2026-02-17 03:47:58.715338 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-17 03:47:58.715343 | orchestrator | Tuesday 17 February 2026 03:47:44 +0000 (0:00:01.259) 0:09:44.168 ****** 2026-02-17 03:47:58.715348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:47:58.715354 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:47:58.715359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:47:58.715365 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.715370 | orchestrator | 2026-02-17 03:47:58.715375 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-17 03:47:58.715380 | orchestrator | Tuesday 17 February 2026 03:47:45 +0000 (0:00:01.000) 0:09:45.168 ****** 2026-02-17 03:47:58.715385 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715390 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715395 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715401 | orchestrator | 2026-02-17 03:47:58.715406 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-17 03:47:58.715411 | orchestrator | 2026-02-17 03:47:58.715416 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 03:47:58.715421 | orchestrator | Tuesday 17 February 2026 03:47:46 +0000 (0:00:00.921) 0:09:46.090 ****** 2026-02-17 03:47:58.715427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:47:58.715433 | orchestrator | 2026-02-17 03:47:58.715438 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 03:47:58.715443 | orchestrator | Tuesday 17 February 2026 03:47:46 +0000 (0:00:00.585) 0:09:46.676 ****** 2026-02-17 03:47:58.715448 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:47:58.715453 | orchestrator | 2026-02-17 03:47:58.715458 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 03:47:58.715463 | 
orchestrator | Tuesday 17 February 2026 03:47:47 +0000 (0:00:00.890) 0:09:47.567 ****** 2026-02-17 03:47:58.715468 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.715474 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.715479 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.715489 | orchestrator | 2026-02-17 03:47:58.715494 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 03:47:58.715499 | orchestrator | Tuesday 17 February 2026 03:47:47 +0000 (0:00:00.337) 0:09:47.904 ****** 2026-02-17 03:47:58.715504 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715509 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715514 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715519 | orchestrator | 2026-02-17 03:47:58.715524 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 03:47:58.715529 | orchestrator | Tuesday 17 February 2026 03:47:48 +0000 (0:00:00.721) 0:09:48.626 ****** 2026-02-17 03:47:58.715534 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715551 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715557 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715563 | orchestrator | 2026-02-17 03:47:58.715569 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 03:47:58.715575 | orchestrator | Tuesday 17 February 2026 03:47:49 +0000 (0:00:01.056) 0:09:49.682 ****** 2026-02-17 03:47:58.715581 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715587 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715592 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715598 | orchestrator | 2026-02-17 03:47:58.715604 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 03:47:58.715610 | orchestrator | Tuesday 17 February 2026 03:47:50 +0000 
(0:00:00.745) 0:09:50.428 ****** 2026-02-17 03:47:58.715628 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.715634 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.715640 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.715646 | orchestrator | 2026-02-17 03:47:58.715652 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 03:47:58.715657 | orchestrator | Tuesday 17 February 2026 03:47:50 +0000 (0:00:00.346) 0:09:50.775 ****** 2026-02-17 03:47:58.715664 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.715670 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.715675 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.715681 | orchestrator | 2026-02-17 03:47:58.715687 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 03:47:58.715693 | orchestrator | Tuesday 17 February 2026 03:47:51 +0000 (0:00:00.339) 0:09:51.115 ****** 2026-02-17 03:47:58.715698 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.715704 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.715710 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.715716 | orchestrator | 2026-02-17 03:47:58.715722 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 03:47:58.715728 | orchestrator | Tuesday 17 February 2026 03:47:51 +0000 (0:00:00.648) 0:09:51.763 ****** 2026-02-17 03:47:58.715734 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715740 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715745 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715751 | orchestrator | 2026-02-17 03:47:58.715757 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 03:47:58.715762 | orchestrator | Tuesday 17 February 2026 03:47:52 +0000 (0:00:00.763) 
0:09:52.527 ****** 2026-02-17 03:47:58.715768 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715774 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715805 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715811 | orchestrator | 2026-02-17 03:47:58.715817 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 03:47:58.715829 | orchestrator | Tuesday 17 February 2026 03:47:53 +0000 (0:00:00.763) 0:09:53.291 ****** 2026-02-17 03:47:58.715835 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.715847 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.715853 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.715866 | orchestrator | 2026-02-17 03:47:58.715871 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 03:47:58.715882 | orchestrator | Tuesday 17 February 2026 03:47:53 +0000 (0:00:00.329) 0:09:53.621 ****** 2026-02-17 03:47:58.715888 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.715894 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.715900 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.715906 | orchestrator | 2026-02-17 03:47:58.715912 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 03:47:58.715918 | orchestrator | Tuesday 17 February 2026 03:47:54 +0000 (0:00:00.653) 0:09:54.274 ****** 2026-02-17 03:47:58.715923 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715929 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715934 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715939 | orchestrator | 2026-02-17 03:47:58.715945 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 03:47:58.715950 | orchestrator | Tuesday 17 February 2026 03:47:54 +0000 (0:00:00.361) 0:09:54.636 ****** 2026-02-17 
03:47:58.715955 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715960 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715965 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.715970 | orchestrator | 2026-02-17 03:47:58.715975 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 03:47:58.715980 | orchestrator | Tuesday 17 February 2026 03:47:54 +0000 (0:00:00.352) 0:09:54.988 ****** 2026-02-17 03:47:58.715985 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.715990 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.715995 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.716000 | orchestrator | 2026-02-17 03:47:58.716005 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 03:47:58.716011 | orchestrator | Tuesday 17 February 2026 03:47:55 +0000 (0:00:00.367) 0:09:55.356 ****** 2026-02-17 03:47:58.716016 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.716021 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.716026 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.716031 | orchestrator | 2026-02-17 03:47:58.716036 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 03:47:58.716041 | orchestrator | Tuesday 17 February 2026 03:47:55 +0000 (0:00:00.626) 0:09:55.982 ****** 2026-02-17 03:47:58.716046 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:47:58.716051 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.716057 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.716062 | orchestrator | 2026-02-17 03:47:58.716067 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 03:47:58.716072 | orchestrator | Tuesday 17 February 2026 03:47:56 +0000 (0:00:00.344) 0:09:56.327 ****** 2026-02-17 03:47:58.716077 | orchestrator | 
skipping: [testbed-node-3] 2026-02-17 03:47:58.716082 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:47:58.716087 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:47:58.716092 | orchestrator | 2026-02-17 03:47:58.716097 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 03:47:58.716102 | orchestrator | Tuesday 17 February 2026 03:47:56 +0000 (0:00:00.330) 0:09:56.657 ****** 2026-02-17 03:47:58.716108 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.716113 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.716118 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.716139 | orchestrator | 2026-02-17 03:47:58.716148 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 03:47:58.716153 | orchestrator | Tuesday 17 February 2026 03:47:56 +0000 (0:00:00.370) 0:09:57.027 ****** 2026-02-17 03:47:58.716159 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:47:58.716164 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:47:58.716169 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:47:58.716174 | orchestrator | 2026-02-17 03:47:58.716179 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-17 03:47:58.716184 | orchestrator | Tuesday 17 February 2026 03:47:57 +0000 (0:00:00.903) 0:09:57.931 ****** 2026-02-17 03:47:58.716198 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:48:46.222593 | orchestrator | 2026-02-17 03:48:46.222741 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-17 03:48:46.222771 | orchestrator | Tuesday 17 February 2026 03:47:58 +0000 (0:00:00.807) 0:09:58.738 ****** 2026-02-17 03:48:46.222790 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:48:46.222810 | 
orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 03:48:46.222829 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 03:48:46.222847 | orchestrator | 2026-02-17 03:48:46.222866 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-17 03:48:46.222884 | orchestrator | Tuesday 17 February 2026 03:48:00 +0000 (0:00:02.168) 0:10:00.906 ****** 2026-02-17 03:48:46.222902 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-17 03:48:46.222922 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 03:48:46.222940 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:48:46.222959 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-17 03:48:46.222977 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-17 03:48:46.222997 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:48:46.223016 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-17 03:48:46.223034 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-17 03:48:46.223046 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:48:46.223057 | orchestrator | 2026-02-17 03:48:46.223068 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-17 03:48:46.223080 | orchestrator | Tuesday 17 February 2026 03:48:02 +0000 (0:00:01.229) 0:10:02.136 ****** 2026-02-17 03:48:46.223094 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:48:46.223107 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:48:46.223119 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:48:46.223131 | orchestrator | 2026-02-17 03:48:46.223143 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-17 03:48:46.223156 | orchestrator | Tuesday 17 February 2026 03:48:02 +0000 (0:00:00.352) 0:10:02.488 ****** 2026-02-17 03:48:46.223170 | 
orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:48:46.223183 | orchestrator | 2026-02-17 03:48:46.223233 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-17 03:48:46.223245 | orchestrator | Tuesday 17 February 2026 03:48:03 +0000 (0:00:00.848) 0:10:03.337 ****** 2026-02-17 03:48:46.223260 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 03:48:46.223278 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 03:48:46.223296 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-17 03:48:46.223314 | orchestrator | 2026-02-17 03:48:46.223331 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-17 03:48:46.223350 | orchestrator | Tuesday 17 February 2026 03:48:04 +0000 (0:00:00.828) 0:10:04.165 ****** 2026-02-17 03:48:46.223367 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:48:46.223386 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-17 03:48:46.223406 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:48:46.223424 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-17 03:48:46.223473 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-02-17 03:48:46.223485 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-17 03:48:46.223497 | orchestrator | 2026-02-17 03:48:46.223509 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-17 03:48:46.223527 | orchestrator | Tuesday 17 February 2026 03:48:08 +0000 (0:00:04.209) 0:10:08.374 ****** 2026-02-17 03:48:46.223544 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:48:46.223562 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 03:48:46.223580 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:48:46.223598 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 03:48:46.223617 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:48:46.223654 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 03:48:46.223666 | orchestrator | 2026-02-17 03:48:46.223677 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-17 03:48:46.223688 | orchestrator | Tuesday 17 February 2026 03:48:10 +0000 (0:00:02.206) 0:10:10.581 ****** 2026-02-17 03:48:46.223699 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-17 03:48:46.223710 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:48:46.223721 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-17 03:48:46.223732 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:48:46.223743 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-17 03:48:46.223754 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:48:46.223765 | orchestrator | 2026-02-17 03:48:46.223799 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] 
************************************** 2026-02-17 03:48:46.223811 | orchestrator | Tuesday 17 February 2026 03:48:12 +0000 (0:00:01.658) 0:10:12.239 ****** 2026-02-17 03:48:46.223822 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-17 03:48:46.223833 | orchestrator | 2026-02-17 03:48:46.223844 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-17 03:48:46.223855 | orchestrator | Tuesday 17 February 2026 03:48:12 +0000 (0:00:00.252) 0:10:12.492 ****** 2026-02-17 03:48:46.223866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.223878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.223889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.223900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.223911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.223922 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:48:46.223933 | orchestrator | 2026-02-17 03:48:46.223944 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-17 03:48:46.223955 | orchestrator | Tuesday 17 February 2026 03:48:13 +0000 (0:00:00.673) 0:10:13.165 ****** 2026-02-17 03:48:46.223966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.223977 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.223988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.224009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.224020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 03:48:46.224031 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:48:46.224043 | orchestrator | 2026-02-17 03:48:46.224054 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-17 03:48:46.224065 | orchestrator | Tuesday 17 February 2026 03:48:13 +0000 (0:00:00.625) 0:10:13.790 ****** 2026-02-17 03:48:46.224081 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 03:48:46.224099 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 03:48:46.224117 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 03:48:46.224134 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 03:48:46.224152 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}}) 2026-02-17 03:48:46.224169 | orchestrator | 2026-02-17 03:48:46.224217 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-17 03:48:46.224237 | orchestrator | Tuesday 17 February 2026 03:48:43 +0000 (0:00:30.116) 0:10:43.907 ****** 2026-02-17 03:48:46.224254 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:48:46.224272 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:48:46.224289 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:48:46.224307 | orchestrator | 2026-02-17 03:48:46.224327 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-17 03:48:46.224345 | orchestrator | Tuesday 17 February 2026 03:48:44 +0000 (0:00:00.317) 0:10:44.224 ****** 2026-02-17 03:48:46.224364 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:48:46.224378 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:48:46.224390 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:48:46.224401 | orchestrator | 2026-02-17 03:48:46.224419 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-17 03:48:46.224430 | orchestrator | Tuesday 17 February 2026 03:48:44 +0000 (0:00:00.643) 0:10:44.867 ****** 2026-02-17 03:48:46.224441 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:48:46.224452 | orchestrator | 2026-02-17 03:48:46.224463 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-17 03:48:46.224474 | orchestrator | Tuesday 17 February 2026 03:48:45 +0000 (0:00:00.586) 0:10:45.454 ****** 2026-02-17 03:48:46.224494 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:48:57.425914 | orchestrator | 2026-02-17 03:48:57.426098 | orchestrator | TASK [ceph-rgw : 
Generate systemd unit file] *********************************** 2026-02-17 03:48:57.426121 | orchestrator | Tuesday 17 February 2026 03:48:46 +0000 (0:00:00.791) 0:10:46.245 ****** 2026-02-17 03:48:57.426135 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:48:57.426151 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:48:57.426165 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:48:57.426179 | orchestrator | 2026-02-17 03:48:57.426194 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-17 03:48:57.426293 | orchestrator | Tuesday 17 February 2026 03:48:47 +0000 (0:00:01.333) 0:10:47.579 ****** 2026-02-17 03:48:57.426343 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:48:57.426358 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:48:57.426372 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:48:57.426387 | orchestrator | 2026-02-17 03:48:57.426401 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-17 03:48:57.426414 | orchestrator | Tuesday 17 February 2026 03:48:48 +0000 (0:00:01.181) 0:10:48.761 ****** 2026-02-17 03:48:57.426427 | orchestrator | changed: [testbed-node-4] 2026-02-17 03:48:57.426442 | orchestrator | changed: [testbed-node-3] 2026-02-17 03:48:57.426455 | orchestrator | changed: [testbed-node-5] 2026-02-17 03:48:57.426469 | orchestrator | 2026-02-17 03:48:57.426482 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-17 03:48:57.426496 | orchestrator | Tuesday 17 February 2026 03:48:50 +0000 (0:00:01.734) 0:10:50.495 ****** 2026-02-17 03:48:57.426511 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 03:48:57.426528 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 
'radosgw_frontend_port': 8081}) 2026-02-17 03:48:57.426541 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-17 03:48:57.426554 | orchestrator | 2026-02-17 03:48:57.426567 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-17 03:48:57.426580 | orchestrator | Tuesday 17 February 2026 03:48:53 +0000 (0:00:02.749) 0:10:53.245 ****** 2026-02-17 03:48:57.426593 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:48:57.426606 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:48:57.426619 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:48:57.426631 | orchestrator | 2026-02-17 03:48:57.426644 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-17 03:48:57.426657 | orchestrator | Tuesday 17 February 2026 03:48:53 +0000 (0:00:00.393) 0:10:53.638 ****** 2026-02-17 03:48:57.426673 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:48:57.426686 | orchestrator | 2026-02-17 03:48:57.426699 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-17 03:48:57.426712 | orchestrator | Tuesday 17 February 2026 03:48:54 +0000 (0:00:00.963) 0:10:54.601 ****** 2026-02-17 03:48:57.426726 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:48:57.426740 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:48:57.426755 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:48:57.426768 | orchestrator | 2026-02-17 03:48:57.426781 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-17 03:48:57.426793 | orchestrator | Tuesday 17 February 2026 03:48:54 +0000 (0:00:00.376) 0:10:54.977 ****** 2026-02-17 03:48:57.426801 | orchestrator | skipping: [testbed-node-3] 2026-02-17 
03:48:57.426809 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:48:57.426817 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:48:57.426825 | orchestrator | 2026-02-17 03:48:57.426833 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-17 03:48:57.426841 | orchestrator | Tuesday 17 February 2026 03:48:55 +0000 (0:00:00.371) 0:10:55.349 ****** 2026-02-17 03:48:57.426849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:48:57.426858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:48:57.426866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:48:57.426874 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:48:57.426882 | orchestrator | 2026-02-17 03:48:57.426890 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-17 03:48:57.426898 | orchestrator | Tuesday 17 February 2026 03:48:56 +0000 (0:00:01.237) 0:10:56.586 ****** 2026-02-17 03:48:57.426907 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:48:57.426915 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:48:57.426934 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:48:57.426943 | orchestrator | 2026-02-17 03:48:57.426951 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:48:57.426959 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-17 03:48:57.426984 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-17 03:48:57.426993 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-17 03:48:57.427004 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-17 
03:48:57.427017 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-17 03:48:57.427056 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-17 03:48:57.427071 | orchestrator | 2026-02-17 03:48:57.427086 | orchestrator | 2026-02-17 03:48:57.427094 | orchestrator | 2026-02-17 03:48:57.427102 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:48:57.427110 | orchestrator | Tuesday 17 February 2026 03:48:56 +0000 (0:00:00.260) 0:10:56.847 ****** 2026-02-17 03:48:57.427118 | orchestrator | =============================================================================== 2026-02-17 03:48:57.427126 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 56.48s 2026-02-17 03:48:57.427134 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.77s 2026-02-17 03:48:57.427142 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.12s 2026-02-17 03:48:57.427150 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.17s 2026-02-17 03:48:57.427158 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.85s 2026-02-17 03:48:57.427166 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.75s 2026-02-17 03:48:57.427174 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.11s 2026-02-17 03:48:57.427182 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.47s 2026-02-17 03:48:57.427190 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.06s 2026-02-17 03:48:57.427198 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.65s 2026-02-17 03:48:57.427238 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.29s 2026-02-17 03:48:57.427254 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.28s 2026-02-17 03:48:57.427268 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.48s 2026-02-17 03:48:57.427280 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.30s 2026-02-17 03:48:57.427295 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.21s 2026-02-17 03:48:57.427305 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.05s 2026-02-17 03:48:57.427313 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.78s 2026-02-17 03:48:57.427321 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.48s 2026-02-17 03:48:57.427328 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.29s 2026-02-17 03:48:57.427336 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.28s 2026-02-17 03:49:00.001811 | orchestrator | 2026-02-17 03:48:59 | INFO  | Task 261f5cd1-3ffc-4b3e-af8f-5f99600319a0 
(ceph-pools) was prepared for execution. 2026-02-17 03:49:00.002332 | orchestrator | 2026-02-17 03:48:59 | INFO  | It takes a moment until task 261f5cd1-3ffc-4b3e-af8f-5f99600319a0 (ceph-pools) has been started and output is visible here. 2026-02-17 03:49:15.916977 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-17 03:49:15.917123 | orchestrator | 2.16.14 2026-02-17 03:49:15.917148 | orchestrator | 2026-02-17 03:49:15.917170 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-17 03:49:15.917190 | orchestrator | 2026-02-17 03:49:15.917210 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 03:49:15.917231 | orchestrator | Tuesday 17 February 2026 03:49:05 +0000 (0:00:00.680) 0:00:00.680 ****** 2026-02-17 03:49:15.917359 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:49:15.917381 | orchestrator | 2026-02-17 03:49:15.917398 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 03:49:15.917415 | orchestrator | Tuesday 17 February 2026 03:49:05 +0000 (0:00:00.733) 0:00:01.414 ****** 2026-02-17 03:49:15.917435 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:15.917454 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:15.917473 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:15.917493 | orchestrator | 2026-02-17 03:49:15.917515 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 03:49:15.917533 | orchestrator | Tuesday 17 February 2026 03:49:06 +0000 (0:00:00.705) 0:00:02.119 ****** 2026-02-17 03:49:15.917552 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:15.917572 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:15.917585 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:15.917598 
| orchestrator | 2026-02-17 03:49:15.917610 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 03:49:15.917623 | orchestrator | Tuesday 17 February 2026 03:49:06 +0000 (0:00:00.455) 0:00:02.574 ****** 2026-02-17 03:49:15.917636 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:15.917648 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:15.917661 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:15.917673 | orchestrator | 2026-02-17 03:49:15.917704 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 03:49:15.917717 | orchestrator | Tuesday 17 February 2026 03:49:07 +0000 (0:00:00.972) 0:00:03.547 ****** 2026-02-17 03:49:15.917729 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:15.917742 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:15.917754 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:15.917766 | orchestrator | 2026-02-17 03:49:15.917779 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 03:49:15.917791 | orchestrator | Tuesday 17 February 2026 03:49:08 +0000 (0:00:00.328) 0:00:03.875 ****** 2026-02-17 03:49:15.917803 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:15.917814 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:15.917825 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:15.917836 | orchestrator | 2026-02-17 03:49:15.917847 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 03:49:15.917858 | orchestrator | Tuesday 17 February 2026 03:49:08 +0000 (0:00:00.331) 0:00:04.207 ****** 2026-02-17 03:49:15.917868 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:15.917879 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:15.917890 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:15.917901 | orchestrator | 2026-02-17 03:49:15.917913 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 03:49:15.917924 | orchestrator | Tuesday 17 February 2026 03:49:08 +0000 (0:00:00.379) 0:00:04.586 ****** 2026-02-17 03:49:15.917935 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:15.917947 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:15.917958 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:15.917969 | orchestrator | 2026-02-17 03:49:15.917980 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 03:49:15.918015 | orchestrator | Tuesday 17 February 2026 03:49:09 +0000 (0:00:00.576) 0:00:05.163 ****** 2026-02-17 03:49:15.918095 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:15.918106 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:15.918117 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:15.918128 | orchestrator | 2026-02-17 03:49:15.918139 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 03:49:15.918150 | orchestrator | Tuesday 17 February 2026 03:49:09 +0000 (0:00:00.354) 0:00:05.517 ****** 2026-02-17 03:49:15.918161 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 03:49:15.918172 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 03:49:15.918183 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 03:49:15.918193 | orchestrator | 2026-02-17 03:49:15.918204 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 03:49:15.918215 | orchestrator | Tuesday 17 February 2026 03:49:10 +0000 (0:00:00.768) 0:00:06.286 ****** 2026-02-17 03:49:15.918226 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:15.918265 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:15.918277 | 
orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:15.918288 | orchestrator | 2026-02-17 03:49:15.918298 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 03:49:15.918309 | orchestrator | Tuesday 17 February 2026 03:49:11 +0000 (0:00:00.502) 0:00:06.788 ****** 2026-02-17 03:49:15.918320 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 03:49:15.918331 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 03:49:15.918342 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 03:49:15.918353 | orchestrator | 2026-02-17 03:49:15.918364 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 03:49:15.918375 | orchestrator | Tuesday 17 February 2026 03:49:13 +0000 (0:00:02.393) 0:00:09.182 ****** 2026-02-17 03:49:15.918387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-17 03:49:15.918399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-17 03:49:15.918410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-17 03:49:15.918421 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:15.918432 | orchestrator | 2026-02-17 03:49:15.918465 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 03:49:15.918477 | orchestrator | Tuesday 17 February 2026 03:49:14 +0000 (0:00:00.726) 0:00:09.908 ****** 2026-02-17 03:49:15.918491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 03:49:15.918506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 03:49:15.918517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 03:49:15.918528 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:15.918540 | orchestrator | 2026-02-17 03:49:15.918551 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 03:49:15.918562 | orchestrator | Tuesday 17 February 2026 03:49:15 +0000 (0:00:01.245) 0:00:11.153 ****** 2026-02-17 03:49:15.918583 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 03:49:15.918607 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 03:49:15.918619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 03:49:15.918630 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:15.918641 | orchestrator | 2026-02-17 03:49:15.918652 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 03:49:15.918664 | orchestrator | Tuesday 17 February 2026 03:49:15 +0000 (0:00:00.175) 0:00:11.329 ****** 2026-02-17 03:49:15.918677 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6b2dae68d29f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 03:49:12.120031', 'end': '2026-02-17 03:49:12.160164', 'delta': '0:00:00.040133', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b2dae68d29f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 03:49:15.918692 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5939893342f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 03:49:12.745970', 'end': '2026-02-17 03:49:12.797169', 'delta': '0:00:00.051199', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['5939893342f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 03:49:15.918712 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4f72f9ce519e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 03:49:13.318204', 'end': '2026-02-17 03:49:13.360020', 'delta': '0:00:00.041816', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f72f9ce519e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 03:49:22.901833 | orchestrator | 2026-02-17 03:49:22.901933 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 03:49:22.901951 | orchestrator | Tuesday 17 February 2026 03:49:15 +0000 (0:00:00.200) 0:00:11.529 ****** 2026-02-17 03:49:22.901984 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:22.901997 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:22.902008 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:22.902059 | orchestrator | 2026-02-17 03:49:22.902071 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 03:49:22.902082 | orchestrator | Tuesday 17 February 2026 03:49:16 +0000 (0:00:00.468) 0:00:11.998 ****** 2026-02-17 03:49:22.902094 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-17 03:49:22.902106 | orchestrator | 2026-02-17 03:49:22.902134 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 03:49:22.902150 | 
orchestrator | Tuesday 17 February 2026 03:49:18 +0000 (0:00:01.657) 0:00:13.655 ****** 2026-02-17 03:49:22.902161 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902173 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902184 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902195 | orchestrator | 2026-02-17 03:49:22.902205 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 03:49:22.902217 | orchestrator | Tuesday 17 February 2026 03:49:18 +0000 (0:00:00.337) 0:00:13.992 ****** 2026-02-17 03:49:22.902230 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902276 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902289 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902300 | orchestrator | 2026-02-17 03:49:22.902311 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 03:49:22.902322 | orchestrator | Tuesday 17 February 2026 03:49:19 +0000 (0:00:00.701) 0:00:14.693 ****** 2026-02-17 03:49:22.902333 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902344 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902355 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902366 | orchestrator | 2026-02-17 03:49:22.902378 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 03:49:22.902389 | orchestrator | Tuesday 17 February 2026 03:49:19 +0000 (0:00:00.317) 0:00:15.011 ****** 2026-02-17 03:49:22.902400 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:22.902411 | orchestrator | 2026-02-17 03:49:22.902421 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 03:49:22.902432 | orchestrator | Tuesday 17 February 2026 03:49:19 +0000 (0:00:00.162) 0:00:15.174 ****** 2026-02-17 03:49:22.902443 | orchestrator | skipping: 
[testbed-node-3] 2026-02-17 03:49:22.902454 | orchestrator | 2026-02-17 03:49:22.902465 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 03:49:22.902475 | orchestrator | Tuesday 17 February 2026 03:49:19 +0000 (0:00:00.266) 0:00:15.440 ****** 2026-02-17 03:49:22.902486 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902498 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902508 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902519 | orchestrator | 2026-02-17 03:49:22.902530 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 03:49:22.902541 | orchestrator | Tuesday 17 February 2026 03:49:20 +0000 (0:00:00.319) 0:00:15.760 ****** 2026-02-17 03:49:22.902552 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902563 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902573 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902584 | orchestrator | 2026-02-17 03:49:22.902595 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 03:49:22.902606 | orchestrator | Tuesday 17 February 2026 03:49:20 +0000 (0:00:00.548) 0:00:16.308 ****** 2026-02-17 03:49:22.902617 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902628 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902639 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902649 | orchestrator | 2026-02-17 03:49:22.902660 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 03:49:22.902671 | orchestrator | Tuesday 17 February 2026 03:49:21 +0000 (0:00:00.344) 0:00:16.654 ****** 2026-02-17 03:49:22.902693 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902704 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902715 | orchestrator | skipping: 
[testbed-node-5] 2026-02-17 03:49:22.902726 | orchestrator | 2026-02-17 03:49:22.902737 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 03:49:22.902748 | orchestrator | Tuesday 17 February 2026 03:49:21 +0000 (0:00:00.366) 0:00:17.020 ****** 2026-02-17 03:49:22.902759 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902770 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902781 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902792 | orchestrator | 2026-02-17 03:49:22.902802 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 03:49:22.902813 | orchestrator | Tuesday 17 February 2026 03:49:21 +0000 (0:00:00.347) 0:00:17.367 ****** 2026-02-17 03:49:22.902824 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902835 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902846 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902857 | orchestrator | 2026-02-17 03:49:22.902868 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 03:49:22.902880 | orchestrator | Tuesday 17 February 2026 03:49:22 +0000 (0:00:00.574) 0:00:17.942 ****** 2026-02-17 03:49:22.902891 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:22.902902 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:22.902913 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:22.902923 | orchestrator | 2026-02-17 03:49:22.902934 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 03:49:22.902945 | orchestrator | Tuesday 17 February 2026 03:49:22 +0000 (0:00:00.354) 0:00:18.297 ****** 2026-02-17 03:49:22.902977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.903112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.979973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.980103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.980134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:22.980191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.980330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:22.980359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.980380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:22.980411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.980432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:22.980454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:22.980474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:22.980508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-17 03:49:23.257630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.257721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.257732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.257765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.257793 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.257812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.257828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.257851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.257865 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:23.257880 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:23.257894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.257908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.257923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.257945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.513555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.513664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.513706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.513722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.513735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.513749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-17 03:49:23.513795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.513823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.513838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.513853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.513867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-17-02-26-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-17 03:49:23.513882 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:23.513897 | orchestrator | 2026-02-17 03:49:23.513911 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] ***
2026-02-17 03:49:23.513926 | orchestrator | Tuesday 17 February 2026 03:49:23 +0000 (0:00:00.710) 0:00:19.008 ******
2026-02-17 03:49:23.513948 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {...}}, 'ansible_loop_var': 'item'})
2026-02-17 03:49:23.622081 | orchestrator | [identical per-item skip messages repeated for every block device on each host and condensed here: dm-0, dm-1, loop0-loop7, sda (80.00 GB root disk), sdb and sdc (20.00 GB ceph LVM PVs backing dm-0/dm-1), sdd (20.00 GB, unused), and sr0 (QEMU DVD-ROM, config-2) on testbed-node-3, testbed-node-4, and testbed-node-5; every item was skipped because the conditional 'osd_auto_discovery | default(False) | bool' evaluated to False]
2026-02-17 03:49:23.919579 | orchestrator | skipping: [testbed-node-3]
2026-02-17 03:49:23.919655 | orchestrator | skipping: [testbed-node-4]
2026-02-17 03:49:34.864660 | orchestrator | skipping: [testbed-node-5]
2026-02-17 03:49:34.864673 | orchestrator |
2026-02-17 03:49:34.864686 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-17 03:49:34.864699 | orchestrator | Tuesday 17 February 2026 03:49:24 +0000 (0:00:00.738) 0:00:19.746 ******
2026-02-17 03:49:34.864709 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:49:34.864721 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:49:34.864732 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:49:34.864743 | orchestrator |
2026-02-17 03:49:34.864754 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-17 03:49:34.864766 | orchestrator | Tuesday 17 February 2026 03:49:25 +0000 (0:00:01.005) 0:00:20.752 ******
2026-02-17 03:49:34.864778 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:49:34.864790 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:49:34.864802 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:49:34.864813 | orchestrator |
2026-02-17 03:49:34.864826 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 03:49:34.864837 | orchestrator | Tuesday 17 February 2026 03:49:25 +0000 (0:00:00.336) 0:00:21.089 ******
2026-02-17 03:49:34.864844 | orchestrator | ok: [testbed-node-3]
2026-02-17 03:49:34.864852 | orchestrator | ok: [testbed-node-4]
2026-02-17 03:49:34.864859 | orchestrator | ok: [testbed-node-5]
2026-02-17 03:49:34.864866 | orchestrator |
2026-02-17 03:49:34.864886 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 03:49:34.864894 | orchestrator | Tuesday 17 February 2026 03:49:26 +0000 (0:00:00.651) 0:00:21.740
****** 2026-02-17 03:49:34.864901 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.864908 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:34.864916 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:34.864923 | orchestrator | 2026-02-17 03:49:34.864930 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 03:49:34.864937 | orchestrator | Tuesday 17 February 2026 03:49:26 +0000 (0:00:00.329) 0:00:22.070 ****** 2026-02-17 03:49:34.864945 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.864952 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:34.864959 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:34.864966 | orchestrator | 2026-02-17 03:49:34.864973 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 03:49:34.864980 | orchestrator | Tuesday 17 February 2026 03:49:27 +0000 (0:00:00.730) 0:00:22.800 ****** 2026-02-17 03:49:34.864988 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.864995 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:34.865002 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:34.865009 | orchestrator | 2026-02-17 03:49:34.865016 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 03:49:34.865024 | orchestrator | Tuesday 17 February 2026 03:49:27 +0000 (0:00:00.343) 0:00:23.143 ****** 2026-02-17 03:49:34.865031 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-17 03:49:34.865038 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-17 03:49:34.865046 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-17 03:49:34.865053 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-17 03:49:34.865060 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-17 03:49:34.865067 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-17 03:49:34.865074 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-17 03:49:34.865088 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-17 03:49:34.865096 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-17 03:49:34.865104 | orchestrator | 2026-02-17 03:49:34.865111 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 03:49:34.865119 | orchestrator | Tuesday 17 February 2026 03:49:28 +0000 (0:00:01.128) 0:00:24.272 ****** 2026-02-17 03:49:34.865142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-17 03:49:34.865151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-17 03:49:34.865158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-17 03:49:34.865165 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.865177 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-17 03:49:34.865188 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-17 03:49:34.865207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-17 03:49:34.865220 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:34.865230 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-17 03:49:34.865242 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-17 03:49:34.865253 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-17 03:49:34.865348 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:34.865365 | orchestrator | 2026-02-17 03:49:34.865377 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 03:49:34.865387 | orchestrator | Tuesday 17 February 2026 03:49:29 +0000 (0:00:00.422) 0:00:24.694 ****** 2026-02-17 
03:49:34.865400 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:49:34.865412 | orchestrator | 2026-02-17 03:49:34.865424 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 03:49:34.865438 | orchestrator | Tuesday 17 February 2026 03:49:29 +0000 (0:00:00.809) 0:00:25.504 ****** 2026-02-17 03:49:34.865450 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.865461 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:34.865473 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:34.865485 | orchestrator | 2026-02-17 03:49:34.865497 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 03:49:34.865509 | orchestrator | Tuesday 17 February 2026 03:49:30 +0000 (0:00:00.343) 0:00:25.847 ****** 2026-02-17 03:49:34.865521 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.865534 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:34.865547 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:34.865558 | orchestrator | 2026-02-17 03:49:34.865570 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 03:49:34.865583 | orchestrator | Tuesday 17 February 2026 03:49:30 +0000 (0:00:00.354) 0:00:26.202 ****** 2026-02-17 03:49:34.865596 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.865609 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:49:34.865621 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:49:34.865628 | orchestrator | 2026-02-17 03:49:34.865636 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 03:49:34.865643 | orchestrator | Tuesday 17 February 2026 03:49:31 +0000 (0:00:00.576) 0:00:26.778 ****** 2026-02-17 
03:49:34.865651 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:34.865658 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:34.865665 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:34.865672 | orchestrator | 2026-02-17 03:49:34.865680 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 03:49:34.865687 | orchestrator | Tuesday 17 February 2026 03:49:31 +0000 (0:00:00.426) 0:00:27.204 ****** 2026-02-17 03:49:34.865694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:49:34.865712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:49:34.865726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:49:34.865733 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.865740 | orchestrator | 2026-02-17 03:49:34.865748 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 03:49:34.865755 | orchestrator | Tuesday 17 February 2026 03:49:31 +0000 (0:00:00.412) 0:00:27.617 ****** 2026-02-17 03:49:34.865762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:49:34.865770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:49:34.865777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:49:34.865784 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.865791 | orchestrator | 2026-02-17 03:49:34.865798 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 03:49:34.865806 | orchestrator | Tuesday 17 February 2026 03:49:32 +0000 (0:00:00.396) 0:00:28.013 ****** 2026-02-17 03:49:34.865813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 03:49:34.865820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 03:49:34.865827 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 03:49:34.865834 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:49:34.865842 | orchestrator | 2026-02-17 03:49:34.865849 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 03:49:34.865856 | orchestrator | Tuesday 17 February 2026 03:49:32 +0000 (0:00:00.391) 0:00:28.405 ****** 2026-02-17 03:49:34.865863 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:49:34.865870 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:49:34.865877 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:49:34.865885 | orchestrator | 2026-02-17 03:49:34.865892 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 03:49:34.865899 | orchestrator | Tuesday 17 February 2026 03:49:33 +0000 (0:00:00.333) 0:00:28.739 ****** 2026-02-17 03:49:34.865906 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-17 03:49:34.865915 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-17 03:49:34.865927 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-17 03:49:34.865936 | orchestrator | 2026-02-17 03:49:34.865943 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 03:49:34.865950 | orchestrator | Tuesday 17 February 2026 03:49:33 +0000 (0:00:00.840) 0:00:29.580 ****** 2026-02-17 03:49:34.865958 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 03:49:34.865976 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 03:51:10.958308 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 03:51:10.958418 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-17 03:51:10.958428 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-17 03:51:10.958433 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 03:51:10.958438 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 03:51:10.958442 | orchestrator | 2026-02-17 03:51:10.958447 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 03:51:10.958452 | orchestrator | Tuesday 17 February 2026 03:49:34 +0000 (0:00:00.894) 0:00:30.474 ****** 2026-02-17 03:51:10.958456 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 03:51:10.958460 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 03:51:10.958464 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 03:51:10.958468 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-17 03:51:10.958490 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 03:51:10.958494 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 03:51:10.958498 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 03:51:10.958502 | orchestrator | 2026-02-17 03:51:10.958505 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-17 03:51:10.958509 | orchestrator | Tuesday 17 February 2026 03:49:36 +0000 (0:00:01.746) 0:00:32.221 ****** 2026-02-17 03:51:10.958513 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:51:10.958518 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:51:10.958524 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-17 03:51:10.958530 | orchestrator | 2026-02-17 03:51:10.958536 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-17 03:51:10.958542 | orchestrator | Tuesday 17 February 2026 03:49:37 +0000 (0:00:00.649) 0:00:32.870 ****** 2026-02-17 03:51:10.958551 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-17 03:51:10.958560 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-17 03:51:10.958578 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-17 03:51:10.958583 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-17 03:51:10.958587 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-17 03:51:10.958591 | orchestrator | 2026-02-17 03:51:10.958594 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-17 03:51:10.958598 | orchestrator | Tuesday 17 February 2026 03:50:21 +0000 (0:00:43.762) 0:01:16.633 ****** 2026-02-17 03:51:10.958602 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958606 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958609 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958617 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958620 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958624 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-17 03:51:10.958628 | orchestrator | 2026-02-17 03:51:10.958632 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-17 03:51:10.958635 | orchestrator | Tuesday 17 February 2026 03:50:42 +0000 (0:00:21.517) 0:01:38.150 ****** 2026-02-17 03:51:10.958659 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958668 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958671 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958675 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958679 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958683 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958688 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 03:51:10.958694 | orchestrator | 2026-02-17 03:51:10.958700 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-17 03:51:10.958706 | orchestrator | Tuesday 17 February 2026 03:50:53 +0000 (0:00:11.277) 0:01:49.427 ****** 2026-02-17 03:51:10.958712 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958718 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-17 03:51:10.958723 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 03:51:10.958730 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958737 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-17 03:51:10.958744 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 03:51:10.958751 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958758 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-17 03:51:10.958764 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 03:51:10.958768 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958772 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-17 03:51:10.958776 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 03:51:10.958780 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958783 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-17 03:51:10.958787 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 03:51:10.958791 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 03:51:10.958795 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-17 03:51:10.958798 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 03:51:10.958802 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-17 03:51:10.958806 | orchestrator | 2026-02-17 03:51:10.958810 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:51:10.958817 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-17 03:51:10.958822 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-17 03:51:10.958827 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-17 03:51:10.958831 | orchestrator | 2026-02-17 03:51:10.958835 | orchestrator | 2026-02-17 03:51:10.958839 | orchestrator | 2026-02-17 03:51:10.958843 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:51:10.958846 | orchestrator | Tuesday 17 February 2026 03:51:10 +0000 (0:00:17.131) 0:02:06.559 ****** 2026-02-17 03:51:10.958850 | orchestrator | =============================================================================== 2026-02-17 03:51:10.958859 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.76s 2026-02-17 03:51:10.958866 | orchestrator | generate keys ---------------------------------------------------------- 21.52s 2026-02-17 03:51:10.958871 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.13s 
2026-02-17 03:51:10.958877 | orchestrator | get keys from monitors ------------------------------------------------- 11.28s 2026-02-17 03:51:10.958883 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.39s 2026-02-17 03:51:10.958889 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.75s 2026-02-17 03:51:10.958894 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.66s 2026-02-17 03:51:10.958902 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.25s 2026-02-17 03:51:10.958906 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.13s 2026-02-17 03:51:10.958910 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 1.01s 2026-02-17 03:51:10.958913 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.97s 2026-02-17 03:51:10.958917 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.89s 2026-02-17 03:51:10.958921 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.84s 2026-02-17 03:51:10.958928 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.81s 2026-02-17 03:51:11.385630 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.77s 2026-02-17 03:51:11.385728 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.74s 2026-02-17 03:51:11.385741 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.73s 2026-02-17 03:51:11.385752 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.73s 2026-02-17 03:51:11.385762 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.73s 2026-02-17 
03:51:11.385772 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.71s 2026-02-17 03:51:14.227010 | orchestrator | 2026-02-17 03:51:14 | INFO  | Task 6d9a0f79-1093-43ad-b0c1-791a9ce2c917 (copy-ceph-keys) was prepared for execution. 2026-02-17 03:51:14.227089 | orchestrator | 2026-02-17 03:51:14 | INFO  | It takes a moment until task 6d9a0f79-1093-43ad-b0c1-791a9ce2c917 (copy-ceph-keys) has been started and output is visible here. 2026-02-17 03:51:53.072565 | orchestrator | 2026-02-17 03:51:53.072643 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-17 03:51:53.072650 | orchestrator | 2026-02-17 03:51:53.072655 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-17 03:51:53.072660 | orchestrator | Tuesday 17 February 2026 03:51:18 +0000 (0:00:00.182) 0:00:00.182 ****** 2026-02-17 03:51:53.072665 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-17 03:51:53.072671 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072675 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072679 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-17 03:51:53.072683 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072687 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-17 03:51:53.072691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-17 03:51:53.072695 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-17 03:51:53.072716 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-17 03:51:53.072720 | orchestrator | 2026-02-17 03:51:53.072724 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-17 03:51:53.072728 | orchestrator | Tuesday 17 February 2026 03:51:22 +0000 (0:00:04.169) 0:00:04.351 ****** 2026-02-17 03:51:53.072732 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-17 03:51:53.072746 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072750 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072754 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-17 03:51:53.072758 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072762 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-17 03:51:53.072766 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-17 03:51:53.072770 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-17 03:51:53.072774 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-17 03:51:53.072778 | orchestrator | 2026-02-17 03:51:53.072782 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-17 03:51:53.072786 | orchestrator | Tuesday 17 February 2026 03:51:26 +0000 (0:00:04.048) 0:00:08.399 ****** 2026-02-17 03:51:53.072791 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-17 03:51:53.072795 | orchestrator | 2026-02-17 03:51:53.072799 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-17 03:51:53.072803 | orchestrator | Tuesday 17 February 2026 03:51:27 +0000 (0:00:01.053) 0:00:09.453 ****** 2026-02-17 03:51:53.072808 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-17 03:51:53.072812 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072817 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072821 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-17 03:51:53.072825 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072829 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-17 03:51:53.072833 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-17 03:51:53.072837 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-17 03:51:53.072841 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-17 03:51:53.072845 | orchestrator | 2026-02-17 03:51:53.072849 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-17 03:51:53.072853 | orchestrator | Tuesday 17 February 2026 03:51:42 +0000 (0:00:14.517) 0:00:23.970 ****** 2026-02-17 03:51:53.072857 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-17 03:51:53.072861 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-17 03:51:53.072865 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-17 03:51:53.072869 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-17 03:51:53.072883 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-17 03:51:53.072892 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-17 03:51:53.072896 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-17 03:51:53.072900 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-17 03:51:53.072903 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-17 03:51:53.072907 | orchestrator | 2026-02-17 03:51:53.072911 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-17 03:51:53.072915 | orchestrator | Tuesday 17 February 2026 03:51:45 +0000 (0:00:03.399) 0:00:27.370 ****** 2026-02-17 03:51:53.072920 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-17 03:51:53.072924 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072928 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072932 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-17 03:51:53.072936 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-17 03:51:53.072940 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-17 03:51:53.072944 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-17 03:51:53.072948 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-17 03:51:53.072952 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-17 03:51:53.072956 | orchestrator | 2026-02-17 03:51:53.072960 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:51:53.072966 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:51:53.072974 | orchestrator | 2026-02-17 03:51:53.072980 | orchestrator | 2026-02-17 03:51:53.072986 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:51:53.072992 | orchestrator | Tuesday 17 February 2026 03:51:52 +0000 (0:00:06.846) 0:00:34.217 ****** 2026-02-17 03:51:53.072998 | orchestrator | =============================================================================== 2026-02-17 03:51:53.073004 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.52s 2026-02-17 03:51:53.073010 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.85s 2026-02-17 03:51:53.073015 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.17s 2026-02-17 03:51:53.073021 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.05s 2026-02-17 03:51:53.073027 | orchestrator | Check if target directories exist --------------------------------------- 3.40s 2026-02-17 03:51:53.073032 | orchestrator | Create share directory -------------------------------------------------- 1.05s 2026-02-17 03:52:05.606828 | orchestrator | 2026-02-17 03:52:05 | INFO  | Task 20130b20-0fcc-46f0-b7a3-c5026c74db7a (cephclient) was prepared for execution. 
2026-02-17 03:52:05.606976 | orchestrator | 2026-02-17 03:52:05 | INFO  | It takes a moment until task 20130b20-0fcc-46f0-b7a3-c5026c74db7a (cephclient) has been started and output is visible here. 2026-02-17 03:53:07.981633 | orchestrator | 2026-02-17 03:53:07.981755 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-17 03:53:07.981773 | orchestrator | 2026-02-17 03:53:07.981785 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-17 03:53:07.981797 | orchestrator | Tuesday 17 February 2026 03:52:10 +0000 (0:00:00.256) 0:00:00.256 ****** 2026-02-17 03:53:07.981809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-17 03:53:07.981846 | orchestrator | 2026-02-17 03:53:07.981858 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-17 03:53:07.981869 | orchestrator | Tuesday 17 February 2026 03:52:10 +0000 (0:00:00.252) 0:00:00.508 ****** 2026-02-17 03:53:07.981881 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-17 03:53:07.981892 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-17 03:53:07.981903 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-17 03:53:07.981915 | orchestrator | 2026-02-17 03:53:07.981926 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-17 03:53:07.981937 | orchestrator | Tuesday 17 February 2026 03:52:11 +0000 (0:00:01.367) 0:00:01.875 ****** 2026-02-17 03:53:07.981948 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-17 03:53:07.981960 | orchestrator | 2026-02-17 03:53:07.981971 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-02-17 03:53:07.981982 | orchestrator | Tuesday 17 February 2026 03:52:13 +0000 (0:00:01.582) 0:00:03.457 ****** 2026-02-17 03:53:07.981993 | orchestrator | changed: [testbed-manager] 2026-02-17 03:53:07.982004 | orchestrator | 2026-02-17 03:53:07.982083 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-17 03:53:07.982097 | orchestrator | Tuesday 17 February 2026 03:52:14 +0000 (0:00:00.935) 0:00:04.393 ****** 2026-02-17 03:53:07.982108 | orchestrator | changed: [testbed-manager] 2026-02-17 03:53:07.982119 | orchestrator | 2026-02-17 03:53:07.982131 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-17 03:53:07.982142 | orchestrator | Tuesday 17 February 2026 03:52:15 +0000 (0:00:01.065) 0:00:05.459 ****** 2026-02-17 03:53:07.982153 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-02-17 03:53:07.982164 | orchestrator | ok: [testbed-manager] 2026-02-17 03:53:07.982176 | orchestrator | 2026-02-17 03:53:07.982187 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-17 03:53:07.982198 | orchestrator | Tuesday 17 February 2026 03:52:57 +0000 (0:00:42.133) 0:00:47.593 ****** 2026-02-17 03:53:07.982210 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-17 03:53:07.982221 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-17 03:53:07.982233 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-17 03:53:07.982244 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-17 03:53:07.982255 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-17 03:53:07.982267 | orchestrator | 2026-02-17 03:53:07.982279 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-17 03:53:07.982290 | 
orchestrator | Tuesday 17 February 2026 03:53:01 +0000 (0:00:04.249) 0:00:51.842 ****** 2026-02-17 03:53:07.982301 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-17 03:53:07.982312 | orchestrator | 2026-02-17 03:53:07.982324 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-17 03:53:07.982335 | orchestrator | Tuesday 17 February 2026 03:53:02 +0000 (0:00:00.553) 0:00:52.396 ****** 2026-02-17 03:53:07.982346 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:53:07.982358 | orchestrator | 2026-02-17 03:53:07.982369 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-17 03:53:07.982380 | orchestrator | Tuesday 17 February 2026 03:53:02 +0000 (0:00:00.154) 0:00:52.550 ****** 2026-02-17 03:53:07.982391 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:53:07.982402 | orchestrator | 2026-02-17 03:53:07.982413 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-02-17 03:53:07.982425 | orchestrator | Tuesday 17 February 2026 03:53:03 +0000 (0:00:00.572) 0:00:53.123 ****** 2026-02-17 03:53:07.982451 | orchestrator | changed: [testbed-manager] 2026-02-17 03:53:07.982476 | orchestrator | 2026-02-17 03:53:07.982512 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-17 03:53:07.982548 | orchestrator | Tuesday 17 February 2026 03:53:04 +0000 (0:00:01.610) 0:00:54.733 ****** 2026-02-17 03:53:07.982560 | orchestrator | changed: [testbed-manager] 2026-02-17 03:53:07.982571 | orchestrator | 2026-02-17 03:53:07.982581 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-17 03:53:07.982592 | orchestrator | Tuesday 17 February 2026 03:53:05 +0000 (0:00:00.741) 0:00:55.475 ****** 2026-02-17 03:53:07.982603 | orchestrator | changed: [testbed-manager] 2026-02-17 03:53:07.982614 | 
orchestrator | 2026-02-17 03:53:07.982625 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-17 03:53:07.982636 | orchestrator | Tuesday 17 February 2026 03:53:06 +0000 (0:00:00.621) 0:00:56.097 ****** 2026-02-17 03:53:07.982647 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-17 03:53:07.982658 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-17 03:53:07.982668 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-17 03:53:07.982680 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-17 03:53:07.982690 | orchestrator | 2026-02-17 03:53:07.982702 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:53:07.982714 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 03:53:07.982726 | orchestrator | 2026-02-17 03:53:07.982737 | orchestrator | 2026-02-17 03:53:07.982767 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:53:07.982779 | orchestrator | Tuesday 17 February 2026 03:53:07 +0000 (0:00:01.585) 0:00:57.683 ****** 2026-02-17 03:53:07.982790 | orchestrator | =============================================================================== 2026-02-17 03:53:07.982801 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.13s 2026-02-17 03:53:07.982812 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.25s 2026-02-17 03:53:07.982823 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.61s 2026-02-17 03:53:07.982833 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.59s 2026-02-17 03:53:07.982844 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.58s 2026-02-17 03:53:07.982855 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.37s 2026-02-17 03:53:07.982866 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.07s 2026-02-17 03:53:07.982876 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.94s 2026-02-17 03:53:07.982887 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s 2026-02-17 03:53:07.982898 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s 2026-02-17 03:53:07.982908 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.57s 2026-02-17 03:53:07.982919 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.55s 2026-02-17 03:53:07.982930 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-02-17 03:53:07.982941 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-02-17 03:53:10.447682 | orchestrator | 2026-02-17 03:53:10 | INFO  | Task 446c2169-d935-4b82-b9f4-6832dd8d7b4e (ceph-bootstrap-dashboard) was prepared for execution. 2026-02-17 03:53:10.447787 | orchestrator | 2026-02-17 03:53:10 | INFO  | It takes a moment until task 446c2169-d935-4b82-b9f4-6832dd8d7b4e (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-02-17 03:54:32.508085 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-17 03:54:32.508230 | orchestrator | 2.16.14 2026-02-17 03:54:32.508246 | orchestrator | 2026-02-17 03:54:32.508258 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-17 03:54:32.508269 | orchestrator | 2026-02-17 03:54:32.508280 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-17 03:54:32.508321 | orchestrator | Tuesday 17 February 2026 03:53:15 +0000 (0:00:00.334) 0:00:00.334 ****** 2026-02-17 03:54:32.508331 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508343 | orchestrator | 2026-02-17 03:54:32.508360 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-17 03:54:32.508376 | orchestrator | Tuesday 17 February 2026 03:53:17 +0000 (0:00:02.135) 0:00:02.470 ****** 2026-02-17 03:54:32.508392 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508407 | orchestrator | 2026-02-17 03:54:32.508422 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-17 03:54:32.508438 | orchestrator | Tuesday 17 February 2026 03:53:18 +0000 (0:00:01.108) 0:00:03.579 ****** 2026-02-17 03:54:32.508454 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508470 | orchestrator | 2026-02-17 03:54:32.508485 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-17 03:54:32.508501 | orchestrator | Tuesday 17 February 2026 03:53:19 +0000 (0:00:01.127) 0:00:04.707 ****** 2026-02-17 03:54:32.508517 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508534 | orchestrator | 2026-02-17 03:54:32.508551 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-17 03:54:32.508589 | orchestrator | Tuesday 17 February 
2026 03:53:20 +0000 (0:00:01.234) 0:00:05.942 ****** 2026-02-17 03:54:32.508606 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508620 | orchestrator | 2026-02-17 03:54:32.508633 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-17 03:54:32.508669 | orchestrator | Tuesday 17 February 2026 03:53:22 +0000 (0:00:01.085) 0:00:07.027 ****** 2026-02-17 03:54:32.508679 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508689 | orchestrator | 2026-02-17 03:54:32.508699 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-17 03:54:32.508708 | orchestrator | Tuesday 17 February 2026 03:53:23 +0000 (0:00:01.112) 0:00:08.140 ****** 2026-02-17 03:54:32.508718 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508728 | orchestrator | 2026-02-17 03:54:32.508737 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-17 03:54:32.508747 | orchestrator | Tuesday 17 February 2026 03:53:25 +0000 (0:00:02.052) 0:00:10.193 ****** 2026-02-17 03:54:32.508757 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508766 | orchestrator | 2026-02-17 03:54:32.508776 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-17 03:54:32.508785 | orchestrator | Tuesday 17 February 2026 03:53:26 +0000 (0:00:01.281) 0:00:11.474 ****** 2026-02-17 03:54:32.508795 | orchestrator | changed: [testbed-manager] 2026-02-17 03:54:32.508804 | orchestrator | 2026-02-17 03:54:32.508814 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-17 03:54:32.508823 | orchestrator | Tuesday 17 February 2026 03:54:07 +0000 (0:00:40.978) 0:00:52.453 ****** 2026-02-17 03:54:32.508833 | orchestrator | skipping: [testbed-manager] 2026-02-17 03:54:32.508842 | orchestrator | 2026-02-17 03:54:32.508852 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-02-17 03:54:32.508861 | orchestrator | 2026-02-17 03:54:32.508871 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-17 03:54:32.508881 | orchestrator | Tuesday 17 February 2026 03:54:07 +0000 (0:00:00.174) 0:00:52.627 ****** 2026-02-17 03:54:32.508890 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:54:32.508900 | orchestrator | 2026-02-17 03:54:32.508909 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-17 03:54:32.508919 | orchestrator | 2026-02-17 03:54:32.508928 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-17 03:54:32.508938 | orchestrator | Tuesday 17 February 2026 03:54:19 +0000 (0:00:11.818) 0:01:04.445 ****** 2026-02-17 03:54:32.508947 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:54:32.508957 | orchestrator | 2026-02-17 03:54:32.508966 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-17 03:54:32.508986 | orchestrator | 2026-02-17 03:54:32.508996 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-17 03:54:32.509006 | orchestrator | Tuesday 17 February 2026 03:54:30 +0000 (0:00:11.242) 0:01:15.688 ****** 2026-02-17 03:54:32.509017 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:54:32.509026 | orchestrator | 2026-02-17 03:54:32.509036 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:54:32.509047 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 03:54:32.509060 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:54:32.509070 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:54:32.509080 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 03:54:32.509096 | orchestrator | 2026-02-17 03:54:32.509112 | orchestrator | 2026-02-17 03:54:32.509128 | orchestrator | 2026-02-17 03:54:32.509143 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:54:32.509158 | orchestrator | Tuesday 17 February 2026 03:54:32 +0000 (0:00:01.375) 0:01:17.064 ****** 2026-02-17 03:54:32.509174 | orchestrator | =============================================================================== 2026-02-17 03:54:32.509191 | orchestrator | Create admin user ------------------------------------------------------ 40.98s 2026-02-17 03:54:32.509236 | orchestrator | Restart ceph manager service ------------------------------------------- 24.44s 2026-02-17 03:54:32.509252 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.14s 2026-02-17 03:54:32.509269 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.05s 2026-02-17 03:54:32.509286 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.28s 2026-02-17 03:54:32.509302 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.23s 2026-02-17 03:54:32.509318 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.13s 2026-02-17 03:54:32.509333 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s 2026-02-17 03:54:32.509349 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s 2026-02-17 03:54:32.509367 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.09s 2026-02-17 03:54:32.509392 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.17s 2026-02-17 03:54:32.851246 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-02-17 03:54:34.973181 | orchestrator | 2026-02-17 03:54:34 | INFO  | Task ce1af7ae-c3af-46ad-bbd4-c47e758c52d9 (keystone) was prepared for execution. 2026-02-17 03:54:34.973290 | orchestrator | 2026-02-17 03:54:34 | INFO  | It takes a moment until task ce1af7ae-c3af-46ad-bbd4-c47e758c52d9 (keystone) has been started and output is visible here. 2026-02-17 03:54:42.598891 | orchestrator | 2026-02-17 03:54:42.599010 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 03:54:42.599027 | orchestrator | 2026-02-17 03:54:42.599039 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 03:54:42.599068 | orchestrator | Tuesday 17 February 2026 03:54:39 +0000 (0:00:00.260) 0:00:00.260 ****** 2026-02-17 03:54:42.599081 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:54:42.599093 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:54:42.599104 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:54:42.599116 | orchestrator | 2026-02-17 03:54:42.599127 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 03:54:42.599138 | orchestrator | Tuesday 17 February 2026 03:54:39 +0000 (0:00:00.338) 0:00:00.599 ****** 2026-02-17 03:54:42.599175 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-17 03:54:42.599187 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-17 03:54:42.599198 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-17 03:54:42.599208 | orchestrator | 2026-02-17 03:54:42.599219 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-17 03:54:42.599230 | orchestrator | 2026-02-17 03:54:42.599241 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-02-17 03:54:42.599252 | orchestrator | Tuesday 17 February 2026 03:54:40 +0000 (0:00:00.497) 0:00:01.096 ****** 2026-02-17 03:54:42.599263 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:54:42.599275 | orchestrator | 2026-02-17 03:54:42.599285 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-17 03:54:42.599296 | orchestrator | Tuesday 17 February 2026 03:54:40 +0000 (0:00:00.625) 0:00:01.721 ****** 2026-02-17 03:54:42.599313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:42.599329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:42.599368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:42.599391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:54:42.599405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:54:42.599417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:54:42.599431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:54:42.599444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:54:42.599457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:54:42.599477 | orchestrator | 2026-02-17 03:54:42.599490 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-17 03:54:42.599509 | orchestrator | Tuesday 17 February 2026 03:54:42 +0000 (0:00:01.814) 0:00:03.536 ****** 2026-02-17 03:54:48.598425 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:54:48.598525 | orchestrator | 2026-02-17 03:54:48.598535 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-17 03:54:48.598556 | orchestrator | Tuesday 17 February 2026 03:54:42 +0000 (0:00:00.290) 0:00:03.826 ****** 2026-02-17 03:54:48.598563 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:54:48.598569 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:54:48.598576 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:54:48.598627 | orchestrator | 2026-02-17 03:54:48.598636 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-17 03:54:48.598643 | orchestrator | Tuesday 17 February 2026 03:54:43 +0000 (0:00:00.324) 0:00:04.151 ****** 2026-02-17 03:54:48.598650 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 03:54:48.598656 | orchestrator | 2026-02-17 03:54:48.598662 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-17 03:54:48.598669 | orchestrator | Tuesday 17 February 2026 03:54:44 +0000 (0:00:00.973) 0:00:05.124 ****** 2026-02-17 03:54:48.598676 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:54:48.598683 | orchestrator | 2026-02-17 03:54:48.598690 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-17 03:54:48.598700 | orchestrator | Tuesday 17 February 2026 03:54:44 +0000 (0:00:00.618) 0:00:05.743 ****** 2026-02-17 03:54:48.598717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:48.598738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:48.598750 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:48.598808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:54:48.598823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:54:48.598834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:54:48.598844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:54:48.598853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:54:48.598871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:54:48.598881 | orchestrator | 2026-02-17 03:54:48.598891 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-17 03:54:48.598902 | orchestrator | Tuesday 17 February 2026 03:54:47 +0000 (0:00:03.172) 0:00:08.915 ****** 2026-02-17 03:54:48.598922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:54:49.451183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:54:49.451288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:54:49.451304 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:54:49.451321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:54:49.451355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:54:49.451373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:54:49.451386 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:54:49.451418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:54:49.451432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-17 03:54:49.451444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:54:49.451464 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:54:49.451476 | orchestrator | 2026-02-17 03:54:49.451488 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-17 03:54:49.451502 | orchestrator | Tuesday 17 February 2026 03:54:48 +0000 (0:00:00.628) 0:00:09.543 ****** 2026-02-17 03:54:49.451514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:54:49.451532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:54:49.451553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:54:52.729350 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:54:52.729459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:54:52.729478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:54:52.729513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:54:52.729523 | 
orchestrator | skipping: [testbed-node-1] 2026-02-17 03:54:52.729547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:54:52.729557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:54:52.729583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:54:52.729625 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:54:52.729635 | orchestrator | 2026-02-17 03:54:52.729645 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-17 03:54:52.729655 | orchestrator | Tuesday 17 February 2026 03:54:49 +0000 (0:00:00.853) 0:00:10.396 ****** 2026-02-17 03:54:52.729665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:52.729682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:52.729699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:52.729732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:54:57.945666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:54:57.946637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-17 03:54:57.946674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:54:57.946686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:54:57.946718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 
03:54:57.946735 | orchestrator | 2026-02-17 03:54:57.946753 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-17 03:54:57.946770 | orchestrator | Tuesday 17 February 2026 03:54:52 +0000 (0:00:03.278) 0:00:13.675 ****** 2026-02-17 03:54:57.946808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:57.946830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:57.946841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:54:57.946851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:54:57.946866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:54:57.946883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:55:01.838279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:55:01.838430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:55:01.838454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:55:01.838471 | orchestrator | 2026-02-17 03:55:01.838489 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-17 03:55:01.838505 | orchestrator | Tuesday 17 February 2026 03:54:57 +0000 (0:00:05.211) 0:00:18.887 ****** 2026-02-17 03:55:01.838520 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:55:01.838536 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:55:01.838550 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:55:01.838564 | orchestrator | 
2026-02-17 03:55:01.838580 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-17 03:55:01.838595 | orchestrator | Tuesday 17 February 2026 03:54:59 +0000 (0:00:01.441) 0:00:20.328 ****** 2026-02-17 03:55:01.838669 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:55:01.838685 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:55:01.838701 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:55:01.838715 | orchestrator | 2026-02-17 03:55:01.838729 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-17 03:55:01.838743 | orchestrator | Tuesday 17 February 2026 03:55:00 +0000 (0:00:00.867) 0:00:21.196 ****** 2026-02-17 03:55:01.838758 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:55:01.838773 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:55:01.838787 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:55:01.838803 | orchestrator | 2026-02-17 03:55:01.838835 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-17 03:55:01.838851 | orchestrator | Tuesday 17 February 2026 03:55:00 +0000 (0:00:00.633) 0:00:21.829 ****** 2026-02-17 03:55:01.838867 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:55:01.838882 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:55:01.838897 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:55:01.838912 | orchestrator | 2026-02-17 03:55:01.838928 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-17 03:55:01.838943 | orchestrator | Tuesday 17 February 2026 03:55:01 +0000 (0:00:00.333) 0:00:22.163 ****** 2026-02-17 03:55:01.838983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:55:01.839013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:55:01.839029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:55:01.839045 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:55:01.839061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:55:01.839084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:55:01.839099 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:55:01.839126 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:55:01.839153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-17 03:55:21.779942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 03:55:21.780080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 03:55:21.780110 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:55:21.780132 | orchestrator | 2026-02-17 03:55:21.780150 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-17 03:55:21.780170 | orchestrator | Tuesday 17 February 2026 03:55:01 +0000 (0:00:00.617) 0:00:22.780 ****** 2026-02-17 03:55:21.780187 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:55:21.780205 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:55:21.780223 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:55:21.780240 | orchestrator | 2026-02-17 03:55:21.780258 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-17 03:55:21.780277 | orchestrator | Tuesday 17 February 2026 03:55:02 +0000 (0:00:00.394) 0:00:23.174 ****** 2026-02-17 03:55:21.780296 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-17 03:55:21.780315 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-17 03:55:21.780366 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-17 03:55:21.780386 | orchestrator | 2026-02-17 03:55:21.780424 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-17 03:55:21.780444 | orchestrator | Tuesday 17 February 2026 03:55:04 +0000 (0:00:02.020) 0:00:25.195 ****** 2026-02-17 03:55:21.780462 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 03:55:21.780482 | orchestrator | 2026-02-17 03:55:21.780501 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-17 03:55:21.780520 | orchestrator | Tuesday 17 February 2026 03:55:05 +0000 (0:00:01.078) 0:00:26.274 ****** 2026-02-17 03:55:21.780537 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:55:21.780549 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:55:21.780559 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:55:21.780570 | orchestrator | 2026-02-17 03:55:21.780581 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-17 03:55:21.780592 | orchestrator | Tuesday 17 February 2026 03:55:06 +0000 (0:00:00.718) 0:00:26.992 ****** 2026-02-17 03:55:21.780603 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-17 03:55:21.780614 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 03:55:21.780656 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-17 03:55:21.780668 | orchestrator | 2026-02-17 03:55:21.780679 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-17 03:55:21.780690 | orchestrator | Tuesday 17 February 2026 03:55:07 +0000 (0:00:01.129) 
0:00:28.122 ****** 2026-02-17 03:55:21.780702 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:55:21.780714 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:55:21.780725 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:55:21.780735 | orchestrator | 2026-02-17 03:55:21.780746 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-17 03:55:21.780757 | orchestrator | Tuesday 17 February 2026 03:55:07 +0000 (0:00:00.546) 0:00:28.668 ****** 2026-02-17 03:55:21.780768 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-17 03:55:21.780780 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-17 03:55:21.780791 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-17 03:55:21.780802 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-17 03:55:21.780813 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-17 03:55:21.780825 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-17 03:55:21.780835 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-17 03:55:21.780847 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-17 03:55:21.780880 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-17 03:55:21.780892 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-17 03:55:21.780903 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-17 
03:55:21.780913 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-17 03:55:21.780924 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-17 03:55:21.780935 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-17 03:55:21.780946 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-17 03:55:21.780969 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-17 03:55:21.780980 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-17 03:55:21.780991 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-17 03:55:21.781002 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-17 03:55:21.781013 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-17 03:55:21.781024 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-17 03:55:21.781035 | orchestrator | 2026-02-17 03:55:21.781046 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-17 03:55:21.781057 | orchestrator | Tuesday 17 February 2026 03:55:16 +0000 (0:00:09.037) 0:00:37.705 ****** 2026-02-17 03:55:21.781067 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-17 03:55:21.781078 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-17 03:55:21.781089 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-17 03:55:21.781099 
| orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-17 03:55:21.781110 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-17 03:55:21.781121 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-17 03:55:21.781132 | orchestrator | 2026-02-17 03:55:21.781143 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-17 03:55:21.781160 | orchestrator | Tuesday 17 February 2026 03:55:19 +0000 (0:00:02.750) 0:00:40.456 ****** 2026-02-17 03:55:21.781175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:55:21.781199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:56:59.850138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-17 03:56:59.850257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:56:59.850277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:56:59.850284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-17 03:56:59.850290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:56:59.850309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:56:59.850320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-17 03:56:59.850326 | orchestrator | 2026-02-17 03:56:59.850333 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-02-17 03:56:59.850340 | orchestrator | Tuesday 17 February 2026 03:55:21 +0000 (0:00:02.264) 0:00:42.720 ****** 2026-02-17 03:56:59.850346 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:56:59.850353 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:56:59.850359 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:56:59.850368 | orchestrator | 2026-02-17 03:56:59.850377 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-17 03:56:59.850387 | orchestrator | Tuesday 17 February 2026 03:55:22 +0000 (0:00:00.558) 0:00:43.278 ****** 2026-02-17 03:56:59.850396 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:56:59.850406 | orchestrator | 2026-02-17 03:56:59.850415 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-17 03:56:59.850424 | orchestrator | Tuesday 17 February 2026 03:55:24 +0000 (0:00:02.454) 0:00:45.733 ****** 2026-02-17 03:56:59.850433 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:56:59.850443 | orchestrator | 2026-02-17 03:56:59.850453 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-17 03:56:59.850462 | orchestrator | Tuesday 17 February 2026 03:55:26 +0000 (0:00:02.205) 0:00:47.938 ****** 2026-02-17 03:56:59.850484 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:56:59.850495 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:56:59.850512 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:56:59.850520 | orchestrator | 2026-02-17 03:56:59.850529 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-17 03:56:59.850538 | orchestrator | Tuesday 17 February 2026 03:55:27 +0000 (0:00:00.895) 0:00:48.834 ****** 2026-02-17 03:56:59.850547 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:56:59.850556 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:56:59.850564 | orchestrator | ok: 
[testbed-node-2] 2026-02-17 03:56:59.850573 | orchestrator | 2026-02-17 03:56:59.850583 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-17 03:56:59.850599 | orchestrator | Tuesday 17 February 2026 03:55:28 +0000 (0:00:00.350) 0:00:49.184 ****** 2026-02-17 03:56:59.850605 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:56:59.850611 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:56:59.850616 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:56:59.850622 | orchestrator | 2026-02-17 03:56:59.850627 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-17 03:56:59.850633 | orchestrator | Tuesday 17 February 2026 03:55:28 +0000 (0:00:00.579) 0:00:49.763 ****** 2026-02-17 03:56:59.850639 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:56:59.850645 | orchestrator | 2026-02-17 03:56:59.850651 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-17 03:56:59.850658 | orchestrator | Tuesday 17 February 2026 03:55:42 +0000 (0:00:13.859) 0:01:03.623 ****** 2026-02-17 03:56:59.850664 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:56:59.850670 | orchestrator | 2026-02-17 03:56:59.850676 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-17 03:56:59.850682 | orchestrator | Tuesday 17 February 2026 03:55:53 +0000 (0:00:10.439) 0:01:14.062 ****** 2026-02-17 03:56:59.850694 | orchestrator | 2026-02-17 03:56:59.850700 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-17 03:56:59.850706 | orchestrator | Tuesday 17 February 2026 03:55:53 +0000 (0:00:00.071) 0:01:14.133 ****** 2026-02-17 03:56:59.850712 | orchestrator | 2026-02-17 03:56:59.850737 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-17 
03:56:59.850744 | orchestrator | Tuesday 17 February 2026 03:55:53 +0000 (0:00:00.073) 0:01:14.206 ****** 2026-02-17 03:56:59.850750 | orchestrator | 2026-02-17 03:56:59.850756 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-17 03:56:59.850762 | orchestrator | Tuesday 17 February 2026 03:55:53 +0000 (0:00:00.072) 0:01:14.279 ****** 2026-02-17 03:56:59.850768 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:56:59.850775 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:56:59.850781 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:56:59.850787 | orchestrator | 2026-02-17 03:56:59.850794 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-17 03:56:59.850804 | orchestrator | Tuesday 17 February 2026 03:56:41 +0000 (0:00:48.435) 0:02:02.715 ****** 2026-02-17 03:56:59.850813 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:56:59.850823 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:56:59.850833 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:56:59.850844 | orchestrator | 2026-02-17 03:56:59.850855 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-17 03:56:59.850865 | orchestrator | Tuesday 17 February 2026 03:56:52 +0000 (0:00:10.370) 0:02:13.086 ****** 2026-02-17 03:56:59.850874 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:56:59.850881 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:56:59.850887 | orchestrator | changed: [testbed-node-2] 2026-02-17 03:56:59.850893 | orchestrator | 2026-02-17 03:56:59.850899 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-17 03:56:59.850905 | orchestrator | Tuesday 17 February 2026 03:56:59 +0000 (0:00:07.130) 0:02:20.216 ****** 2026-02-17 03:56:59.850919 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:57:47.302739 | orchestrator |
2026-02-17 03:57:47.302873 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-17 03:57:47.302888 | orchestrator | Tuesday 17 February 2026 03:56:59 +0000 (0:00:00.580) 0:02:20.796 ******
2026-02-17 03:57:47.302897 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:57:47.302907 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:57:47.302916 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:57:47.302924 | orchestrator |
2026-02-17 03:57:47.302932 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-17 03:57:47.302941 | orchestrator | Tuesday 17 February 2026 03:57:00 +0000 (0:00:01.107) 0:02:21.904 ******
2026-02-17 03:57:47.302949 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:57:47.302958 | orchestrator |
2026-02-17 03:57:47.302975 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-17 03:57:47.302984 | orchestrator | Tuesday 17 February 2026 03:57:02 +0000 (0:00:01.777) 0:02:23.682 ******
2026-02-17 03:57:47.302992 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-17 03:57:47.303000 | orchestrator |
2026-02-17 03:57:47.303008 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-17 03:57:47.303015 | orchestrator | Tuesday 17 February 2026 03:57:13 +0000 (0:00:10.467) 0:02:34.150 ******
2026-02-17 03:57:47.303023 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-17 03:57:47.303031 | orchestrator |
2026-02-17 03:57:47.303039 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-17 03:57:47.303047 | orchestrator | Tuesday 17 February 2026 03:57:36 +0000 (0:00:23.064) 0:02:57.214 ******
2026-02-17 03:57:47.303055 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-17 03:57:47.303085 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-17 03:57:47.303094 | orchestrator |
2026-02-17 03:57:47.303102 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-17 03:57:47.303110 | orchestrator | Tuesday 17 February 2026 03:57:42 +0000 (0:00:05.931) 0:03:03.146 ******
2026-02-17 03:57:47.303117 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:57:47.303125 | orchestrator |
2026-02-17 03:57:47.303133 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-17 03:57:47.303141 | orchestrator | Tuesday 17 February 2026 03:57:42 +0000 (0:00:00.136) 0:03:03.283 ******
2026-02-17 03:57:47.303149 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:57:47.303157 | orchestrator |
2026-02-17 03:57:47.303164 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-17 03:57:47.303172 | orchestrator | Tuesday 17 February 2026 03:57:42 +0000 (0:00:00.145) 0:03:03.428 ******
2026-02-17 03:57:47.303180 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:57:47.303188 | orchestrator |
2026-02-17 03:57:47.303209 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-17 03:57:47.303217 | orchestrator | Tuesday 17 February 2026 03:57:42 +0000 (0:00:00.165) 0:03:03.593 ******
2026-02-17 03:57:47.303225 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:57:47.303233 | orchestrator |
2026-02-17 03:57:47.303240 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-17 03:57:47.303248 | orchestrator | Tuesday 17 February 2026 03:57:43 +0000 (0:00:00.515) 0:03:04.108 ******
2026-02-17 03:57:47.303256 | orchestrator | ok: [testbed-node-0]
2026-02-17
03:57:47.303264 | orchestrator |
2026-02-17 03:57:47.303271 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-17 03:57:47.303279 | orchestrator | Tuesday 17 February 2026 03:57:46 +0000 (0:00:03.313) 0:03:07.422 ******
2026-02-17 03:57:47.303289 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:57:47.303298 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:57:47.303307 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:57:47.303316 | orchestrator |
2026-02-17 03:57:47.303325 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 03:57:47.303335 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-17 03:57:47.303346 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-17 03:57:47.303356 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-17 03:57:47.303365 | orchestrator |
2026-02-17 03:57:47.303374 | orchestrator |
2026-02-17 03:57:47.303383 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 03:57:47.303393 | orchestrator | Tuesday 17 February 2026 03:57:46 +0000 (0:00:00.462) 0:03:07.884 ******
2026-02-17 03:57:47.303402 | orchestrator | ===============================================================================
2026-02-17 03:57:47.303410 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 48.44s
2026-02-17 03:57:47.303419 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.06s
2026-02-17 03:57:47.303427 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.86s
2026-02-17 03:57:47.303435 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.47s
2026-02-17 03:57:47.303443 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.44s
2026-02-17 03:57:47.303450 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.37s
2026-02-17 03:57:47.303458 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.04s
2026-02-17 03:57:47.303466 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.13s
2026-02-17 03:57:47.303480 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.93s
2026-02-17 03:57:47.303502 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.21s
2026-02-17 03:57:47.303510 | orchestrator | keystone : Creating default user role ----------------------------------- 3.31s
2026-02-17 03:57:47.303518 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.28s
2026-02-17 03:57:47.303526 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.17s
2026-02-17 03:57:47.303534 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.75s
2026-02-17 03:57:47.303542 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.45s
2026-02-17 03:57:47.303549 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.26s
2026-02-17 03:57:47.303557 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.21s
2026-02-17 03:57:47.303565 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.02s
2026-02-17 03:57:47.303573 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.81s
2026-02-17 03:57:47.303580 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s
2026-02-17 03:57:49.667245 | orchestrator | 2026-02-17 03:57:49 | INFO  | Task 71fc90d6-ffb2-4254-b1fc-e88ea0941c69 (placement) was prepared for execution.
2026-02-17 03:57:49.667346 | orchestrator | 2026-02-17 03:57:49 | INFO  | It takes a moment until task 71fc90d6-ffb2-4254-b1fc-e88ea0941c69 (placement) has been started and output is visible here.
2026-02-17 03:58:24.202203 | orchestrator |
2026-02-17 03:58:24.202325 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 03:58:24.202343 | orchestrator |
2026-02-17 03:58:24.202356 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 03:58:24.202368 | orchestrator | Tuesday 17 February 2026 03:57:53 +0000 (0:00:00.255) 0:00:00.255 ******
2026-02-17 03:58:24.202380 | orchestrator | ok: [testbed-node-0]
2026-02-17 03:58:24.202392 | orchestrator | ok: [testbed-node-1]
2026-02-17 03:58:24.202404 | orchestrator | ok: [testbed-node-2]
2026-02-17 03:58:24.202416 | orchestrator |
2026-02-17 03:58:24.202427 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 03:58:24.202438 | orchestrator | Tuesday 17 February 2026 03:57:54 +0000 (0:00:00.302) 0:00:00.558 ******
2026-02-17 03:58:24.202449 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-02-17 03:58:24.202461 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-02-17 03:58:24.202471 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-02-17 03:58:24.202482 | orchestrator |
2026-02-17 03:58:24.202508 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-02-17 03:58:24.202520 | orchestrator |
2026-02-17 03:58:24.202531 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-17 03:58:24.202542 | orchestrator | Tuesday 17 February 2026
03:57:54 +0000 (0:00:00.461) 0:00:01.019 ******
2026-02-17 03:58:24.202554 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 03:58:24.202566 | orchestrator |
2026-02-17 03:58:24.202577 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-02-17 03:58:24.202587 | orchestrator | Tuesday 17 February 2026 03:57:55 +0000 (0:00:00.524) 0:00:01.544 ******
2026-02-17 03:58:24.202598 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-02-17 03:58:24.202609 | orchestrator |
2026-02-17 03:58:24.202620 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-02-17 03:58:24.202631 | orchestrator | Tuesday 17 February 2026 03:57:58 +0000 (0:00:03.620) 0:00:05.165 ******
2026-02-17 03:58:24.202642 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-02-17 03:58:24.202677 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-02-17 03:58:24.202690 | orchestrator |
2026-02-17 03:58:24.202701 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-02-17 03:58:24.202712 | orchestrator | Tuesday 17 February 2026 03:58:05 +0000 (0:00:06.508) 0:00:11.673 ******
2026-02-17 03:58:24.202723 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-02-17 03:58:24.202734 | orchestrator |
2026-02-17 03:58:24.202747 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-02-17 03:58:24.202759 | orchestrator | Tuesday 17 February 2026 03:58:08 +0000 (0:00:03.557) 0:00:15.230 ******
2026-02-17 03:58:24.202771 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-17 03:58:24.202784 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-02-17 03:58:24.202796 | orchestrator |
2026-02-17 03:58:24.202838 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-02-17 03:58:24.202850 | orchestrator | Tuesday 17 February 2026 03:58:13 +0000 (0:00:04.445) 0:00:19.676 ******
2026-02-17 03:58:24.202863 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-17 03:58:24.202875 | orchestrator |
2026-02-17 03:58:24.202887 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-02-17 03:58:24.202901 | orchestrator | Tuesday 17 February 2026 03:58:16 +0000 (0:00:03.151) 0:00:22.828 ******
2026-02-17 03:58:24.202913 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-02-17 03:58:24.202925 | orchestrator |
2026-02-17 03:58:24.202937 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-17 03:58:24.202950 | orchestrator | Tuesday 17 February 2026 03:58:20 +0000 (0:00:03.764) 0:00:26.592 ******
2026-02-17 03:58:24.202961 | orchestrator | skipping: [testbed-node-0]
2026-02-17 03:58:24.202974 | orchestrator | skipping: [testbed-node-1]
2026-02-17 03:58:24.202986 | orchestrator | skipping: [testbed-node-2]
2026-02-17 03:58:24.202999 | orchestrator |
2026-02-17 03:58:24.203012 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-02-17 03:58:24.203024 | orchestrator | Tuesday 17 February 2026 03:58:20 +0000 (0:00:00.287) 0:00:26.879 ******
2026-02-17 03:58:24.203040 | orchestrator | changed: [testbed-node-0] =>
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:24.203083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:24.203106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:24.203118 | orchestrator | 2026-02-17 03:58:24.203129 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-17 03:58:24.203140 | orchestrator | Tuesday 17 February 2026 03:58:21 +0000 (0:00:01.033) 0:00:27.913 ****** 2026-02-17 03:58:24.203152 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:58:24.203163 | orchestrator | 2026-02-17 03:58:24.203174 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-17 03:58:24.203185 | orchestrator | Tuesday 17 February 2026 03:58:21 +0000 (0:00:00.323) 0:00:28.236 ****** 2026-02-17 03:58:24.203195 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:58:24.203206 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:58:24.203217 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:58:24.203228 | orchestrator | 2026-02-17 03:58:24.203238 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-17 03:58:24.203249 | orchestrator | Tuesday 17 February 2026 03:58:22 +0000 (0:00:00.312) 0:00:28.548 ****** 2026-02-17 03:58:24.203260 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 03:58:24.203271 | orchestrator | 2026-02-17 03:58:24.203282 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-17 03:58:24.203293 | orchestrator | Tuesday 17 February 2026 03:58:22 +0000 (0:00:00.551) 
0:00:29.100 ****** 2026-02-17 03:58:24.203304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:24.203324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:26.937563 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:26.937724 | orchestrator | 2026-02-17 03:58:26.937748 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-17 03:58:26.937763 | orchestrator | Tuesday 17 February 2026 03:58:24 +0000 (0:00:01.604) 0:00:30.704 ****** 2026-02-17 03:58:26.937777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:26.937790 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:58:26.937849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:26.937863 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:58:26.937876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:26.937914 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:58:26.937927 | orchestrator | 2026-02-17 03:58:26.937939 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-17 03:58:26.937972 | orchestrator | Tuesday 17 February 2026 03:58:24 +0000 (0:00:00.482) 0:00:31.187 ****** 2026-02-17 03:58:26.937994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:26.938007 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:58:26.938077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:26.938092 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:58:26.938106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:26.938118 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:58:26.938131 | orchestrator | 2026-02-17 03:58:26.938143 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-17 03:58:26.938155 | orchestrator | Tuesday 17 February 2026 03:58:25 +0000 (0:00:00.673) 0:00:31.860 ****** 2026-02-17 03:58:26.938169 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:26.938209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:33.667065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:33.667175 | orchestrator | 2026-02-17 03:58:33.667191 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-17 03:58:33.667205 | orchestrator | Tuesday 17 February 2026 03:58:26 +0000 (0:00:01.585) 0:00:33.446 ****** 2026-02-17 03:58:33.667217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-17 03:58:33.667230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:33.667278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:33.667291 | orchestrator | 2026-02-17 03:58:33.667302 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-02-17 03:58:33.667314 | orchestrator | Tuesday 17 February 2026 03:58:29 +0000 (0:00:02.242) 0:00:35.688 ******
2026-02-17 03:58:33.667342 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-17 03:58:33.667355 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-17 03:58:33.667366 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-17 03:58:33.667376 | orchestrator |
2026-02-17 03:58:33.667387 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-02-17 03:58:33.667398 | orchestrator | Tuesday 17 February 2026 03:58:30 +0000 (0:00:01.411) 0:00:37.099 ******
2026-02-17 03:58:33.667410 | orchestrator | changed: [testbed-node-0]
2026-02-17 03:58:33.667422 | orchestrator | changed: [testbed-node-1]
2026-02-17 03:58:33.667433 | orchestrator | changed: [testbed-node-2]
2026-02-17 03:58:33.667444 | orchestrator |
2026-02-17 03:58:33.667455 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-02-17 03:58:33.667466 | orchestrator | Tuesday 17 February 2026 03:58:31 +0000 (0:00:01.310) 0:00:38.410 ******
2026-02-17 03:58:33.667478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:33.667489 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:58:33.667501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:33.667520 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:58:33.667532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-17 03:58:33.667543 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:58:33.667554 | orchestrator | 2026-02-17 03:58:33.667565 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-17 03:58:33.667581 | orchestrator | Tuesday 17 February 2026 03:58:32 +0000 (0:00:00.731) 0:00:39.142 ****** 2026-02-17 03:58:33.667603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:56.733792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:56.733985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-17 03:58:56.734004 | orchestrator | 2026-02-17 03:58:56.734088 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-17 03:58:56.734106 | orchestrator | Tuesday 17 February 2026 03:58:33 +0000 (0:00:01.037) 0:00:40.179 ****** 2026-02-17 03:58:56.734118 | orchestrator | changed: [testbed-node-0] 2026-02-17 
03:58:56.734161 | orchestrator | 2026-02-17 03:58:56.734174 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-17 03:58:56.734185 | orchestrator | Tuesday 17 February 2026 03:58:35 +0000 (0:00:01.992) 0:00:42.171 ****** 2026-02-17 03:58:56.734196 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:58:56.734207 | orchestrator | 2026-02-17 03:58:56.734218 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-17 03:58:56.734229 | orchestrator | Tuesday 17 February 2026 03:58:37 +0000 (0:00:02.102) 0:00:44.274 ****** 2026-02-17 03:58:56.734240 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:58:56.734250 | orchestrator | 2026-02-17 03:58:56.734261 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-17 03:58:56.734272 | orchestrator | Tuesday 17 February 2026 03:58:50 +0000 (0:00:13.120) 0:00:57.394 ****** 2026-02-17 03:58:56.734283 | orchestrator | 2026-02-17 03:58:56.734294 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-17 03:58:56.734304 | orchestrator | Tuesday 17 February 2026 03:58:50 +0000 (0:00:00.068) 0:00:57.463 ****** 2026-02-17 03:58:56.734315 | orchestrator | 2026-02-17 03:58:56.734326 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-17 03:58:56.734338 | orchestrator | Tuesday 17 February 2026 03:58:51 +0000 (0:00:00.074) 0:00:57.538 ****** 2026-02-17 03:58:56.734350 | orchestrator | 2026-02-17 03:58:56.734362 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-17 03:58:56.734374 | orchestrator | Tuesday 17 February 2026 03:58:51 +0000 (0:00:00.069) 0:00:57.607 ****** 2026-02-17 03:58:56.734387 | orchestrator | changed: [testbed-node-0] 2026-02-17 03:58:56.734415 | orchestrator | changed: [testbed-node-2] 2026-02-17 
03:58:56.734427 | orchestrator | changed: [testbed-node-1] 2026-02-17 03:58:56.734439 | orchestrator | 2026-02-17 03:58:56.734452 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 03:58:56.734466 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 03:58:56.734480 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 03:58:56.734493 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 03:58:56.734505 | orchestrator | 2026-02-17 03:58:56.734518 | orchestrator | 2026-02-17 03:58:56.734530 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 03:58:56.734543 | orchestrator | Tuesday 17 February 2026 03:58:56 +0000 (0:00:05.281) 0:01:02.889 ****** 2026-02-17 03:58:56.734565 | orchestrator | =============================================================================== 2026-02-17 03:58:56.734577 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.12s 2026-02-17 03:58:56.734609 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.51s 2026-02-17 03:58:56.734623 | orchestrator | placement : Restart placement-api container ----------------------------- 5.28s 2026-02-17 03:58:56.734636 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.45s 2026-02-17 03:58:56.734648 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.76s 2026-02-17 03:58:56.734658 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.62s 2026-02-17 03:58:56.734669 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.56s 2026-02-17 03:58:56.734680 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.15s 2026-02-17 03:58:56.734691 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.24s 2026-02-17 03:58:56.734701 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.10s 2026-02-17 03:58:56.734712 | orchestrator | placement : Creating placement databases -------------------------------- 1.99s 2026-02-17 03:58:56.734722 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.60s 2026-02-17 03:58:56.734733 | orchestrator | placement : Copying over config.json files for services ----------------- 1.59s 2026-02-17 03:58:56.734744 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.41s 2026-02-17 03:58:56.734754 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s 2026-02-17 03:58:56.734765 | orchestrator | placement : Check placement containers ---------------------------------- 1.04s 2026-02-17 03:58:56.734775 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.03s 2026-02-17 03:58:56.734786 | orchestrator | placement : Copying over existing policy file --------------------------- 0.73s 2026-02-17 03:58:56.734797 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.67s 2026-02-17 03:58:56.734808 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s 2026-02-17 03:58:59.077746 | orchestrator | 2026-02-17 03:58:59 | INFO  | Task e31223cf-0f66-408a-ad60-890e70691fa0 (neutron) was prepared for execution. 2026-02-17 03:58:59.077900 | orchestrator | 2026-02-17 03:58:59 | INFO  | It takes a moment until task e31223cf-0f66-408a-ad60-890e70691fa0 (neutron) has been started and output is visible here. 
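For reference, the loop items printed throughout the placement tasks above all share one service-definition shape: container metadata, a `healthcheck` block, and per-listener `haproxy` entries. A minimal Python sketch of that structure follows — the field values are copied from the log output, but the `haproxy_frontends` helper is hypothetical, added only to show how the internal and external listeners differ; it is not part of kolla-ansible.

```python
# Sketch of the placement-api service definition iterated over in the
# loop items above. Values are copied from the log; the helper below is
# illustrative only and not part of kolla-ansible.

placement_api = {
    "container_name": "placement_api",
    "group": "placement-api",
    "image": "registry.osism.tech/kolla/release/placement-api:12.0.1.20251130",
    "enabled": True,
    "volumes": [
        "/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
        "timeout": "30",
    },
    "haproxy": {
        # Internal listener: reachable via the internal VIP only.
        "placement_api": {
            "enabled": True,
            "mode": "http",
            "external": False,
            "port": "8780",
            "listen_port": "8780",
            "tls_backend": "no",
        },
        # External listener: published under the public API FQDN.
        "placement_api_external": {
            "enabled": True,
            "mode": "http",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8780",
            "listen_port": "8780",
            "tls_backend": "no",
        },
    },
}


def haproxy_frontends(service: dict) -> list[tuple[str, bool, str]]:
    """Return (name, is_external, listen_port) for each enabled haproxy entry."""
    return [
        (name, cfg["external"], cfg["listen_port"])
        for name, cfg in service.get("haproxy", {}).items()
        if cfg.get("enabled")
    ]


print(haproxy_frontends(placement_api))
```

The neutron-server items later in the log follow the same pattern (port 9696, no `tls_backend` key), while agent-style services such as `neutron-ovn-metadata-agent` omit the `haproxy` block entirely.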
2026-02-17 03:59:45.950203 | orchestrator | 2026-02-17 03:59:45.950359 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 03:59:45.950390 | orchestrator | 2026-02-17 03:59:45.950409 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 03:59:45.950546 | orchestrator | Tuesday 17 February 2026 03:59:03 +0000 (0:00:00.257) 0:00:00.257 ****** 2026-02-17 03:59:45.950567 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:59:45.950589 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:59:45.950609 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:59:45.950628 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:59:45.950646 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:59:45.950665 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:59:45.950685 | orchestrator | 2026-02-17 03:59:45.950705 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 03:59:45.950725 | orchestrator | Tuesday 17 February 2026 03:59:03 +0000 (0:00:00.736) 0:00:00.994 ****** 2026-02-17 03:59:45.950745 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-17 03:59:45.950765 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-17 03:59:45.950783 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-17 03:59:45.950803 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-17 03:59:45.950822 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-17 03:59:45.950874 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-17 03:59:45.950928 | orchestrator | 2026-02-17 03:59:45.950947 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-17 03:59:45.950966 | orchestrator | 2026-02-17 03:59:45.950986 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-17 03:59:45.951005 | orchestrator | Tuesday 17 February 2026 03:59:04 +0000 (0:00:00.610) 0:00:01.604 ****** 2026-02-17 03:59:45.951043 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:59:45.951064 | orchestrator | 2026-02-17 03:59:45.951082 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-17 03:59:45.951101 | orchestrator | Tuesday 17 February 2026 03:59:05 +0000 (0:00:01.230) 0:00:02.835 ****** 2026-02-17 03:59:45.951120 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:59:45.951139 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:59:45.951159 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:59:45.951176 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:59:45.951195 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:59:45.951213 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:59:45.951232 | orchestrator | 2026-02-17 03:59:45.951252 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-17 03:59:45.951272 | orchestrator | Tuesday 17 February 2026 03:59:07 +0000 (0:00:01.264) 0:00:04.099 ****** 2026-02-17 03:59:45.951291 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:59:45.951310 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:59:45.951330 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:59:45.951349 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:59:45.951369 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:59:45.951389 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:59:45.951409 | orchestrator | 2026-02-17 03:59:45.951428 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-17 03:59:45.951444 | orchestrator | Tuesday 17 February 2026 03:59:08 +0000 (0:00:01.040) 0:00:05.140 ****** 
2026-02-17 03:59:45.951460 | orchestrator | ok: [testbed-node-0] => { 2026-02-17 03:59:45.951478 | orchestrator |  "changed": false, 2026-02-17 03:59:45.951495 | orchestrator |  "msg": "All assertions passed" 2026-02-17 03:59:45.951512 | orchestrator | } 2026-02-17 03:59:45.951530 | orchestrator | ok: [testbed-node-1] => { 2026-02-17 03:59:45.951547 | orchestrator |  "changed": false, 2026-02-17 03:59:45.951567 | orchestrator |  "msg": "All assertions passed" 2026-02-17 03:59:45.951586 | orchestrator | } 2026-02-17 03:59:45.951606 | orchestrator | ok: [testbed-node-2] => { 2026-02-17 03:59:45.951626 | orchestrator |  "changed": false, 2026-02-17 03:59:45.951645 | orchestrator |  "msg": "All assertions passed" 2026-02-17 03:59:45.951663 | orchestrator | } 2026-02-17 03:59:45.951682 | orchestrator | ok: [testbed-node-3] => { 2026-02-17 03:59:45.951700 | orchestrator |  "changed": false, 2026-02-17 03:59:45.951716 | orchestrator |  "msg": "All assertions passed" 2026-02-17 03:59:45.951734 | orchestrator | } 2026-02-17 03:59:45.951751 | orchestrator | ok: [testbed-node-4] => { 2026-02-17 03:59:45.951768 | orchestrator |  "changed": false, 2026-02-17 03:59:45.951786 | orchestrator |  "msg": "All assertions passed" 2026-02-17 03:59:45.951803 | orchestrator | } 2026-02-17 03:59:45.951821 | orchestrator | ok: [testbed-node-5] => { 2026-02-17 03:59:45.951836 | orchestrator |  "changed": false, 2026-02-17 03:59:45.951853 | orchestrator |  "msg": "All assertions passed" 2026-02-17 03:59:45.951869 | orchestrator | } 2026-02-17 03:59:45.951917 | orchestrator | 2026-02-17 03:59:45.951937 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-17 03:59:45.951954 | orchestrator | Tuesday 17 February 2026 03:59:08 +0000 (0:00:00.797) 0:00:05.937 ****** 2026-02-17 03:59:45.951972 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:59:45.951989 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:59:45.952005 | orchestrator 
| skipping: [testbed-node-2] 2026-02-17 03:59:45.952040 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:59:45.952057 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:59:45.952074 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:59:45.952091 | orchestrator | 2026-02-17 03:59:45.952108 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-17 03:59:45.952125 | orchestrator | Tuesday 17 February 2026 03:59:09 +0000 (0:00:00.607) 0:00:06.545 ****** 2026-02-17 03:59:45.952141 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-17 03:59:45.952158 | orchestrator | 2026-02-17 03:59:45.952174 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-17 03:59:45.952193 | orchestrator | Tuesday 17 February 2026 03:59:13 +0000 (0:00:03.510) 0:00:10.055 ****** 2026-02-17 03:59:45.952210 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-17 03:59:45.952230 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-17 03:59:45.952247 | orchestrator | 2026-02-17 03:59:45.952296 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-17 03:59:45.952317 | orchestrator | Tuesday 17 February 2026 03:59:19 +0000 (0:00:06.277) 0:00:16.332 ****** 2026-02-17 03:59:45.952332 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 03:59:45.952349 | orchestrator | 2026-02-17 03:59:45.952365 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-17 03:59:45.952382 | orchestrator | Tuesday 17 February 2026 03:59:22 +0000 (0:00:03.064) 0:00:19.397 ****** 2026-02-17 03:59:45.952398 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 03:59:45.952414 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-17 03:59:45.952430 | orchestrator | 2026-02-17 03:59:45.952446 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-17 03:59:45.952461 | orchestrator | Tuesday 17 February 2026 03:59:26 +0000 (0:00:03.863) 0:00:23.260 ****** 2026-02-17 03:59:45.952476 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-17 03:59:45.952491 | orchestrator | 2026-02-17 03:59:45.952506 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-17 03:59:45.952521 | orchestrator | Tuesday 17 February 2026 03:59:29 +0000 (0:00:03.251) 0:00:26.512 ****** 2026-02-17 03:59:45.952538 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-17 03:59:45.952554 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-17 03:59:45.952571 | orchestrator | 2026-02-17 03:59:45.952587 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-17 03:59:45.952602 | orchestrator | Tuesday 17 February 2026 03:59:37 +0000 (0:00:07.523) 0:00:34.035 ****** 2026-02-17 03:59:45.952616 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:59:45.952632 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:59:45.952648 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:59:45.952664 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:59:45.952693 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:59:45.952708 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:59:45.952724 | orchestrator | 2026-02-17 03:59:45.952739 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-17 03:59:45.952756 | orchestrator | Tuesday 17 February 2026 03:59:37 +0000 (0:00:00.796) 0:00:34.832 ****** 2026-02-17 03:59:45.952772 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
03:59:45.952787 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:59:45.952802 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:59:45.952819 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:59:45.952835 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:59:45.952851 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:59:45.952867 | orchestrator | 2026-02-17 03:59:45.952979 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-17 03:59:45.953012 | orchestrator | Tuesday 17 February 2026 03:59:39 +0000 (0:00:02.129) 0:00:36.962 ****** 2026-02-17 03:59:45.953029 | orchestrator | ok: [testbed-node-0] 2026-02-17 03:59:45.953046 | orchestrator | ok: [testbed-node-1] 2026-02-17 03:59:45.953062 | orchestrator | ok: [testbed-node-2] 2026-02-17 03:59:45.953079 | orchestrator | ok: [testbed-node-3] 2026-02-17 03:59:45.953089 | orchestrator | ok: [testbed-node-4] 2026-02-17 03:59:45.953098 | orchestrator | ok: [testbed-node-5] 2026-02-17 03:59:45.953108 | orchestrator | 2026-02-17 03:59:45.953118 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-17 03:59:45.953127 | orchestrator | Tuesday 17 February 2026 03:59:41 +0000 (0:00:01.193) 0:00:38.155 ****** 2026-02-17 03:59:45.953137 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:59:45.953146 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:59:45.953156 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:59:45.953165 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:59:45.953175 | orchestrator | skipping: [testbed-node-5] 2026-02-17 03:59:45.953184 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:59:45.953194 | orchestrator | 2026-02-17 03:59:45.953203 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-17 03:59:45.953213 | orchestrator | Tuesday 17 February 2026 03:59:43 +0000 (0:00:02.204) 
0:00:40.360 ****** 2026-02-17 03:59:45.953226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 03:59:45.953259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 03:59:51.231554 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 03:59:51.231705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 03:59:51.231726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 03:59:51.231738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 03:59:51.231751 | orchestrator | 2026-02-17 03:59:51.231765 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-17 03:59:51.231779 | orchestrator | Tuesday 17 February 2026 03:59:45 +0000 (0:00:02.570) 0:00:42.931 ****** 2026-02-17 03:59:51.231791 | orchestrator | [WARNING]: Skipped 2026-02-17 03:59:51.231805 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-17 03:59:51.231819 | orchestrator | due to this access issue: 2026-02-17 03:59:51.231833 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-17 03:59:51.231845 | orchestrator | a directory 2026-02-17 03:59:51.231856 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 03:59:51.231868 | orchestrator | 2026-02-17 03:59:51.231909 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-17 03:59:51.231922 | orchestrator | Tuesday 17 February 2026 03:59:46 +0000 (0:00:00.790) 0:00:43.722 ****** 2026-02-17 03:59:51.231934 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 03:59:51.231946 | orchestrator | 2026-02-17 03:59:51.231956 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-17 03:59:51.231984 | orchestrator | Tuesday 17 February 2026 03:59:47 +0000 (0:00:01.242) 0:00:44.964 ****** 2026-02-17 03:59:51.231995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 03:59:51.232023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 03:59:51.232037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 03:59:51.232049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 03:59:51.232069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 03:59:55.867974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 03:59:55.868060 | orchestrator | 2026-02-17 03:59:55.868071 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-17 03:59:55.868080 | orchestrator | Tuesday 17 February 2026 03:59:51 +0000 (0:00:03.246) 0:00:48.210 ****** 2026-02-17 03:59:55.868089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 03:59:55.868098 | orchestrator | skipping: [testbed-node-2] 2026-02-17 03:59:55.868107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 03:59:55.868114 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:59:55.868121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 03:59:55.868128 | orchestrator | skipping: [testbed-node-0] 2026-02-17 03:59:55.868166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:59:55.868175 | orchestrator | skipping: [testbed-node-3] 2026-02-17 03:59:55.868186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:59:55.868194 | orchestrator | skipping: [testbed-node-4] 2026-02-17 03:59:55.868201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 03:59:55.868208 | orchestrator | skipping: [testbed-node-5] 
2026-02-17 03:59:55.868215 | orchestrator | 2026-02-17 03:59:55.868222 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-17 03:59:55.868229 | orchestrator | Tuesday 17 February 2026 03:59:53 +0000 (0:00:01.911) 0:00:50.122 ****** 2026-02-17 03:59:55.868236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 03:59:55.868243 | orchestrator | skipping: [testbed-node-1] 2026-02-17 03:59:55.868255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:00.916914 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:00.917048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:00.917069 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:00.917082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:00.917096 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:00.917108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:00.917120 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:00.917132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:00.917166 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:00.917179 | orchestrator | 2026-02-17 
04:00:00.917192 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-17 04:00:00.917204 | orchestrator | Tuesday 17 February 2026 03:59:55 +0000 (0:00:02.728) 0:00:52.851 ****** 2026-02-17 04:00:00.917215 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:00.917227 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:00.917238 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:00.917249 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:00.917259 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:00.917270 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:00.917281 | orchestrator | 2026-02-17 04:00:00.917293 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-17 04:00:00.917304 | orchestrator | Tuesday 17 February 2026 03:59:58 +0000 (0:00:02.228) 0:00:55.079 ****** 2026-02-17 04:00:00.917316 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:00.917327 | orchestrator | 2026-02-17 04:00:00.917338 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-17 04:00:00.917366 | orchestrator | Tuesday 17 February 2026 03:59:58 +0000 (0:00:00.143) 0:00:55.223 ****** 2026-02-17 04:00:00.917379 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:00.917390 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:00.917401 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:00.917412 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:00.917423 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:00.917434 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:00.917445 | orchestrator | 2026-02-17 04:00:00.917456 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-17 04:00:00.917467 | orchestrator | Tuesday 17 February 2026 03:59:58 +0000 (0:00:00.576) 
0:00:55.800 ****** 2026-02-17 04:00:00.917485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:00.917497 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:00.917509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 
04:00:00.917531 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:00.917543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:00.917555 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:00.917566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:00.917578 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:00.917606 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:08.995269 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:08.995383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:08.995403 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:08.995415 | orchestrator | 2026-02-17 04:00:08.995427 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-17 04:00:08.995440 | orchestrator | Tuesday 17 February 2026 04:00:00 +0000 (0:00:02.091) 0:00:57.891 ****** 2026-02-17 04:00:08.995453 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:08.995490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:08.995503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:08.995579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 04:00:08.995620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 04:00:08.995643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 04:00:08.995655 | orchestrator | 2026-02-17 04:00:08.995666 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-17 04:00:08.995678 | orchestrator | Tuesday 17 February 2026 04:00:03 +0000 (0:00:03.040) 0:01:00.931 ****** 2026-02-17 04:00:08.995689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:08.995701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:08.995728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:14.085680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 04:00:14.085804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 
04:00:14.085817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 04:00:14.085826 | orchestrator | 2026-02-17 04:00:14.085836 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-17 04:00:14.085845 | orchestrator | Tuesday 17 February 2026 04:00:08 +0000 (0:00:05.044) 0:01:05.976 ****** 2026-02-17 04:00:14.085854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-02-17 04:00:14.085876 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:14.085929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:14.085947 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:14.085955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:14.085963 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:14.085972 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:14.085980 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:14.085988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:14.085996 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:14.086009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:14.086059 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:14.086068 | orchestrator | 2026-02-17 04:00:14.086077 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-17 04:00:14.086092 | orchestrator | Tuesday 17 February 2026 04:00:11 +0000 (0:00:02.235) 0:01:08.211 ****** 2026-02-17 04:00:14.086100 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:14.086109 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:14.086117 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:14.086125 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:00:14.086133 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:00:14.086141 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:00:14.086149 | orchestrator | 2026-02-17 04:00:14.086157 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-17 04:00:14.086171 | orchestrator | Tuesday 17 February 2026 04:00:14 +0000 (0:00:02.856) 0:01:11.068 ****** 2026-02-17 04:00:32.193017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:32.193150 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:32.193180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:32.193202 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:32.193222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:32.193243 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:32.193262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:32.193343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:32.193359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:00:32.193371 | orchestrator | 2026-02-17 04:00:32.193383 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-17 04:00:32.193395 | orchestrator | Tuesday 17 February 2026 04:00:17 +0000 (0:00:03.363) 0:01:14.431 ****** 2026-02-17 04:00:32.193407 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:32.193418 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:32.193429 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:32.193440 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:32.193451 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:32.193462 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:32.193473 | orchestrator | 2026-02-17 04:00:32.193484 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-02-17 04:00:32.193495 | orchestrator | Tuesday 17 February 2026 04:00:19 +0000 (0:00:02.166) 0:01:16.597 ****** 2026-02-17 04:00:32.193506 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:32.193517 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:32.193530 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:32.193542 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:32.193554 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:32.193567 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:32.193579 | orchestrator | 2026-02-17 04:00:32.193592 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-17 04:00:32.193604 | orchestrator | Tuesday 17 February 2026 04:00:21 +0000 (0:00:02.114) 0:01:18.712 ****** 2026-02-17 04:00:32.193617 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:32.193630 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:32.193642 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:32.193655 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:32.193667 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:32.193680 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:32.193690 | orchestrator | 2026-02-17 04:00:32.193702 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-17 04:00:32.193721 | orchestrator | Tuesday 17 February 2026 04:00:23 +0000 (0:00:02.043) 0:01:20.756 ****** 2026-02-17 04:00:32.193732 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:32.193743 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:32.193754 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:32.193765 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:32.193776 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:32.193786 | orchestrator | 
skipping: [testbed-node-5] 2026-02-17 04:00:32.193797 | orchestrator | 2026-02-17 04:00:32.193808 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-17 04:00:32.193819 | orchestrator | Tuesday 17 February 2026 04:00:25 +0000 (0:00:02.113) 0:01:22.869 ****** 2026-02-17 04:00:32.193830 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:32.193841 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:32.193852 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:32.193862 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:32.193873 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:32.193884 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:32.193944 | orchestrator | 2026-02-17 04:00:32.193966 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-17 04:00:32.193999 | orchestrator | Tuesday 17 February 2026 04:00:28 +0000 (0:00:02.216) 0:01:25.085 ****** 2026-02-17 04:00:32.194085 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:32.194099 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:32.194110 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:32.194121 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:32.194175 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:32.194187 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:32.194197 | orchestrator | 2026-02-17 04:00:32.194209 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-17 04:00:32.194220 | orchestrator | Tuesday 17 February 2026 04:00:30 +0000 (0:00:01.987) 0:01:27.073 ****** 2026-02-17 04:00:32.194235 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-17 04:00:32.194254 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:32.194273 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-17 04:00:32.194291 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:32.194308 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-17 04:00:32.194327 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:32.194347 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-17 04:00:32.194379 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-17 04:00:36.284892 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:36.285046 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:00:36.285062 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-17 04:00:36.285075 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:36.285086 | orchestrator | 2026-02-17 04:00:36.285098 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-17 04:00:36.285195 | orchestrator | Tuesday 17 February 2026 04:00:32 +0000 (0:00:02.095) 0:01:29.168 ****** 2026-02-17 04:00:36.285210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:36.285253 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:36.285267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:36.285278 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:00:36.285290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:36.285301 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:36.285328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:36.285341 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:00:36.285372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:36.285392 | orchestrator | 
skipping: [testbed-node-5] 2026-02-17 04:00:36.285404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:00:36.285415 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:00:36.285429 | orchestrator | 2026-02-17 04:00:36.285442 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-17 04:00:36.285455 | orchestrator | Tuesday 17 February 2026 04:00:34 +0000 (0:00:02.022) 0:01:31.190 ****** 2026-02-17 04:00:36.285468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:36.285482 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:00:36.285500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:00:36.285513 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:00:36.285535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:01:01.727717 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.727841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:01:01.727861 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.727874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:01:01.727886 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.727898 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:01:01.727910 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.727921 | orchestrator | 2026-02-17 04:01:01.727933 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-17 04:01:01.727992 | orchestrator | Tuesday 17 February 2026 04:00:36 +0000 (0:00:02.078) 0:01:33.268 ****** 2026-02-17 04:01:01.728005 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.728017 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.728028 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.728039 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.728051 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.728062 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.728075 | orchestrator | 2026-02-17 04:01:01.728113 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-17 04:01:01.728134 | orchestrator | Tuesday 17 February 2026 04:00:38 +0000 (0:00:02.199) 0:01:35.468 ****** 2026-02-17 04:01:01.728151 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.728168 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
04:01:01.728185 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.728203 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:01:01.728219 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:01:01.728237 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:01:01.728256 | orchestrator | 2026-02-17 04:01:01.728275 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-17 04:01:01.728328 | orchestrator | Tuesday 17 February 2026 04:00:42 +0000 (0:00:03.685) 0:01:39.154 ****** 2026-02-17 04:01:01.728349 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.728369 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.728388 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.728407 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.728425 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.728443 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.728461 | orchestrator | 2026-02-17 04:01:01.728481 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-17 04:01:01.728501 | orchestrator | Tuesday 17 February 2026 04:00:44 +0000 (0:00:02.232) 0:01:41.387 ****** 2026-02-17 04:01:01.728520 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.728538 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.728557 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.728574 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.728592 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.728610 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.728629 | orchestrator | 2026-02-17 04:01:01.728649 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-17 04:01:01.728692 | orchestrator | Tuesday 17 February 2026 04:00:46 +0000 (0:00:02.147) 0:01:43.534 ****** 2026-02-17 
04:01:01.728713 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.728732 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.728750 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.728769 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.728788 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.728806 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.728825 | orchestrator | 2026-02-17 04:01:01.728844 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-17 04:01:01.728863 | orchestrator | Tuesday 17 February 2026 04:00:48 +0000 (0:00:02.207) 0:01:45.741 ****** 2026-02-17 04:01:01.728881 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.728899 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.728916 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.728933 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.728976 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.728997 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.729015 | orchestrator | 2026-02-17 04:01:01.729034 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-17 04:01:01.729052 | orchestrator | Tuesday 17 February 2026 04:00:51 +0000 (0:00:02.304) 0:01:48.046 ****** 2026-02-17 04:01:01.729071 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.729090 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.729108 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.729126 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.729145 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.729162 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.729181 | orchestrator | 2026-02-17 04:01:01.729199 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-02-17 04:01:01.729218 | orchestrator | Tuesday 17 February 2026 04:00:53 +0000 (0:00:02.076) 0:01:50.122 ****** 2026-02-17 04:01:01.729234 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.729250 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.729268 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.729285 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.729304 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.729324 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.729340 | orchestrator | 2026-02-17 04:01:01.729358 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-17 04:01:01.729373 | orchestrator | Tuesday 17 February 2026 04:00:55 +0000 (0:00:02.157) 0:01:52.280 ****** 2026-02-17 04:01:01.729390 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.729424 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.729441 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:01.729458 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.729476 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.729494 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.729511 | orchestrator | 2026-02-17 04:01:01.729529 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-17 04:01:01.729545 | orchestrator | Tuesday 17 February 2026 04:00:57 +0000 (0:00:02.251) 0:01:54.532 ****** 2026-02-17 04:01:01.729564 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-17 04:01:01.729582 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.729600 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-17 04:01:01.729618 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 04:01:01.729636 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-17 04:01:01.729653 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:01.729670 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-17 04:01:01.729689 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:01.729707 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-17 04:01:01.729725 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:01.729744 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-17 04:01:01.729773 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:01.729792 | orchestrator | 2026-02-17 04:01:01.729810 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-17 04:01:01.729828 | orchestrator | Tuesday 17 February 2026 04:00:59 +0000 (0:00:01.886) 0:01:56.418 ****** 2026-02-17 04:01:01.729848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:01:01.729869 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:01:01.729908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:01:04.120034 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:01:04.120163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-17 04:01:04.120183 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:01:04.120197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:01:04.120210 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:01:04.120236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:01:04.120249 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:01:04.120260 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 04:01:04.120271 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:01:04.120283 | orchestrator | 2026-02-17 04:01:04.120295 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-17 04:01:04.120307 | orchestrator | Tuesday 17 February 2026 04:01:01 +0000 (0:00:02.285) 0:01:58.704 ****** 2026-02-17 04:01:04.120337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-02-17 04:01:04.120359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:01:04.120376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-17 04:01:04.120389 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 04:01:04.120400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 04:01:04.120426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-17 04:03:24.022696 | orchestrator | 2026-02-17 04:03:24.022816 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-17 04:03:24.022835 | orchestrator | Tuesday 17 February 2026 04:01:04 +0000 (0:00:02.395) 0:02:01.100 ****** 2026-02-17 04:03:24.022847 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:03:24.022860 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:03:24.022871 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:03:24.022883 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:03:24.022894 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:03:24.022905 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:03:24.022916 | orchestrator | 2026-02-17 04:03:24.022928 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-17 04:03:24.022939 | orchestrator | Tuesday 17 February 2026 04:01:04 +0000 (0:00:00.792) 0:02:01.893 ****** 2026-02-17 04:03:24.022950 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:03:24.022961 | orchestrator | 2026-02-17 04:03:24.022972 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-17 04:03:24.022984 | orchestrator | Tuesday 17 February 2026 04:01:06 +0000 (0:00:02.078) 0:02:03.971 ****** 2026-02-17 04:03:24.022995 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:03:24.023006 | orchestrator | 2026-02-17 04:03:24.023017 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-17 04:03:24.023028 | orchestrator | Tuesday 17 February 2026 04:01:09 +0000 (0:00:02.187) 
0:02:06.159 ****** 2026-02-17 04:03:24.023039 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:03:24.023051 | orchestrator | 2026-02-17 04:03:24.023062 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-17 04:03:24.023073 | orchestrator | Tuesday 17 February 2026 04:01:49 +0000 (0:00:40.542) 0:02:46.701 ****** 2026-02-17 04:03:24.023132 | orchestrator | 2026-02-17 04:03:24.023156 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-17 04:03:24.023168 | orchestrator | Tuesday 17 February 2026 04:01:49 +0000 (0:00:00.070) 0:02:46.772 ****** 2026-02-17 04:03:24.023191 | orchestrator | 2026-02-17 04:03:24.023202 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-17 04:03:24.023213 | orchestrator | Tuesday 17 February 2026 04:01:49 +0000 (0:00:00.070) 0:02:46.842 ****** 2026-02-17 04:03:24.023224 | orchestrator | 2026-02-17 04:03:24.023238 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-17 04:03:24.023250 | orchestrator | Tuesday 17 February 2026 04:01:49 +0000 (0:00:00.068) 0:02:46.911 ****** 2026-02-17 04:03:24.023262 | orchestrator | 2026-02-17 04:03:24.023292 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-17 04:03:24.023306 | orchestrator | Tuesday 17 February 2026 04:01:49 +0000 (0:00:00.065) 0:02:46.977 ****** 2026-02-17 04:03:24.023319 | orchestrator | 2026-02-17 04:03:24.023332 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-17 04:03:24.023345 | orchestrator | Tuesday 17 February 2026 04:01:50 +0000 (0:00:00.067) 0:02:47.044 ****** 2026-02-17 04:03:24.023357 | orchestrator | 2026-02-17 04:03:24.023371 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-02-17 
04:03:24.023384 | orchestrator | Tuesday 17 February 2026 04:01:50 +0000 (0:00:00.071) 0:02:47.116 ******
2026-02-17 04:03:24.023420 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:03:24.023433 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:03:24.023446 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:03:24.023464 | orchestrator |
2026-02-17 04:03:24.023484 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-17 04:03:24.023501 | orchestrator | Tuesday 17 February 2026 04:02:18 +0000 (0:00:28.837) 0:03:15.954 ******
2026-02-17 04:03:24.023522 | orchestrator | changed: [testbed-node-4]
2026-02-17 04:03:24.023545 | orchestrator | changed: [testbed-node-3]
2026-02-17 04:03:24.023565 | orchestrator | changed: [testbed-node-5]
2026-02-17 04:03:24.023579 | orchestrator |
2026-02-17 04:03:24.023593 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 04:03:24.023605 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-17 04:03:24.023617 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-17 04:03:24.023629 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-17 04:03:24.023640 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-17 04:03:24.023652 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-17 04:03:24.023663 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-17 04:03:24.023674 | orchestrator |
2026-02-17 04:03:24.023685 | orchestrator |
2026-02-17 04:03:24.023696 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 04:03:24.023708 | orchestrator | Tuesday 17 February 2026 04:03:23 +0000 (0:01:04.565) 0:04:20.519 ******
2026-02-17 04:03:24.023719 | orchestrator | ===============================================================================
2026-02-17 04:03:24.023730 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 64.57s
2026-02-17 04:03:24.023740 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.54s
2026-02-17 04:03:24.023751 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.84s
2026-02-17 04:03:24.023781 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.52s
2026-02-17 04:03:24.023792 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.28s
2026-02-17 04:03:24.023803 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.04s
2026-02-17 04:03:24.023814 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.86s
2026-02-17 04:03:24.023825 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.69s
2026-02-17 04:03:24.023836 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.51s
2026-02-17 04:03:24.023847 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.36s
2026-02-17 04:03:24.023858 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.25s
2026-02-17 04:03:24.023869 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.25s
2026-02-17 04:03:24.023880 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.06s
2026-02-17 04:03:24.023891 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.04s
2026-02-17 04:03:24.023902 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.86s
2026-02-17 04:03:24.023913 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.73s
2026-02-17 04:03:24.023934 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.57s
2026-02-17 04:03:24.023945 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.40s
2026-02-17 04:03:24.023956 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 2.30s
2026-02-17 04:03:24.023968 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.29s
2026-02-17 04:03:26.330300 | orchestrator | 2026-02-17 04:03:26 | INFO  | Task 5728f8a9-7c22-4bac-a3cd-3d3a541cdbec (nova) was prepared for execution.
2026-02-17 04:03:26.330400 | orchestrator | 2026-02-17 04:03:26 | INFO  | It takes a moment until task 5728f8a9-7c22-4bac-a3cd-3d3a541cdbec (nova) has been started and output is visible here.
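The PLAY RECAP block in the neutron play above is the quickest pass/fail signal in these console logs. A minimal sketch (illustrative only, not part of the testbed tooling; the regex and function name are assumptions) of parsing such recap lines so a wrapper script could fail fast on non-zero `failed` or `unreachable` counters:

```python
import re

# Matches Ansible PLAY RECAP host lines, e.g.
# "testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0"
RECAP_RE = re.compile(r"(?P<host>\S+)\s+:\s+(?P<counters>(?:\w+=\d+\s*)+)$")


def parse_recap(line):
    """Return (host, counters dict) for a recap line, or None if it doesn't match."""
    m = RECAP_RE.search(line)
    if not m:
        return None
    counters = {
        key: int(val)
        for key, val in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters


# Usage with a recap line taken from the log above:
line = ("testbed-node-0 : ok=26  changed=15  unreachable=0 "
        "failed=0 skipped=32  rescued=0 ignored=0")
host, counters = parse_recap(line)
assert host == "testbed-node-0"
assert counters["failed"] == 0 and counters["unreachable"] == 0
```

In a periodic job like this one, checking `failed`/`unreachable` per host is cheaper than scanning the full task stream.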
2026-02-17 04:05:20.673450 | orchestrator | 2026-02-17 04:05:20.673591 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:05:20.673611 | orchestrator | 2026-02-17 04:05:20.673623 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-17 04:05:20.673636 | orchestrator | Tuesday 17 February 2026 04:03:30 +0000 (0:00:00.272) 0:00:00.272 ****** 2026-02-17 04:05:20.673647 | orchestrator | changed: [testbed-manager] 2026-02-17 04:05:20.673660 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.673671 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:05:20.673681 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:05:20.673692 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:05:20.673703 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:05:20.673714 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:05:20.673725 | orchestrator | 2026-02-17 04:05:20.673736 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:05:20.673747 | orchestrator | Tuesday 17 February 2026 04:03:31 +0000 (0:00:00.845) 0:00:01.118 ****** 2026-02-17 04:05:20.673758 | orchestrator | changed: [testbed-manager] 2026-02-17 04:05:20.673769 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.673780 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:05:20.673791 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:05:20.673801 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:05:20.673812 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:05:20.673824 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:05:20.673834 | orchestrator | 2026-02-17 04:05:20.673846 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:05:20.673857 | orchestrator | Tuesday 17 February 2026 04:03:32 +0000 (0:00:00.839) 
0:00:01.957 ****** 2026-02-17 04:05:20.673868 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-17 04:05:20.673879 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-17 04:05:20.673890 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-17 04:05:20.673901 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-17 04:05:20.673912 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-17 04:05:20.673922 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-17 04:05:20.673935 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-17 04:05:20.673949 | orchestrator | 2026-02-17 04:05:20.673962 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-17 04:05:20.673975 | orchestrator | 2026-02-17 04:05:20.673987 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-17 04:05:20.674000 | orchestrator | Tuesday 17 February 2026 04:03:33 +0000 (0:00:00.711) 0:00:02.668 ****** 2026-02-17 04:05:20.674013 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:05:20.674094 | orchestrator | 2026-02-17 04:05:20.674108 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-17 04:05:20.674121 | orchestrator | Tuesday 17 February 2026 04:03:33 +0000 (0:00:00.729) 0:00:03.398 ****** 2026-02-17 04:05:20.674134 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-17 04:05:20.674178 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-02-17 04:05:20.674191 | orchestrator | 2026-02-17 04:05:20.674202 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-17 04:05:20.674274 | orchestrator | Tuesday 17 February 2026 04:03:37 +0000 (0:00:03.975) 
0:00:07.373 ****** 2026-02-17 04:05:20.674286 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-17 04:05:20.674297 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-17 04:05:20.674308 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.674319 | orchestrator | 2026-02-17 04:05:20.674330 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-17 04:05:20.674340 | orchestrator | Tuesday 17 February 2026 04:03:41 +0000 (0:00:03.958) 0:00:11.332 ****** 2026-02-17 04:05:20.674351 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.674362 | orchestrator | 2026-02-17 04:05:20.674373 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-17 04:05:20.674384 | orchestrator | Tuesday 17 February 2026 04:03:42 +0000 (0:00:00.592) 0:00:11.924 ****** 2026-02-17 04:05:20.674395 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.674405 | orchestrator | 2026-02-17 04:05:20.674416 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-17 04:05:20.674427 | orchestrator | Tuesday 17 February 2026 04:03:43 +0000 (0:00:01.220) 0:00:13.145 ****** 2026-02-17 04:05:20.674438 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.674448 | orchestrator | 2026-02-17 04:05:20.674459 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-17 04:05:20.674470 | orchestrator | Tuesday 17 February 2026 04:03:46 +0000 (0:00:02.561) 0:00:15.707 ****** 2026-02-17 04:05:20.674481 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:05:20.674492 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.674502 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.674513 | orchestrator | 2026-02-17 04:05:20.674524 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 
2026-02-17 04:05:20.674535 | orchestrator | Tuesday 17 February 2026 04:03:46 +0000 (0:00:00.296) 0:00:16.003 ****** 2026-02-17 04:05:20.674546 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:05:20.674557 | orchestrator | 2026-02-17 04:05:20.674568 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-02-17 04:05:20.674578 | orchestrator | Tuesday 17 February 2026 04:04:17 +0000 (0:00:30.861) 0:00:46.864 ****** 2026-02-17 04:05:20.674589 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.674600 | orchestrator | 2026-02-17 04:05:20.674610 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-17 04:05:20.674621 | orchestrator | Tuesday 17 February 2026 04:04:31 +0000 (0:00:14.473) 0:01:01.338 ****** 2026-02-17 04:05:20.674632 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:05:20.674643 | orchestrator | 2026-02-17 04:05:20.674653 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-17 04:05:20.674664 | orchestrator | Tuesday 17 February 2026 04:04:43 +0000 (0:00:11.881) 0:01:13.220 ****** 2026-02-17 04:05:20.674696 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:05:20.674707 | orchestrator | 2026-02-17 04:05:20.674726 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-02-17 04:05:20.674737 | orchestrator | Tuesday 17 February 2026 04:04:44 +0000 (0:00:00.710) 0:01:13.930 ****** 2026-02-17 04:05:20.674748 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:05:20.674759 | orchestrator | 2026-02-17 04:05:20.674770 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-17 04:05:20.674780 | orchestrator | Tuesday 17 February 2026 04:04:44 +0000 (0:00:00.473) 0:01:14.403 ****** 2026-02-17 04:05:20.674792 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:05:20.674803 | orchestrator | 2026-02-17 04:05:20.674814 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-17 04:05:20.674834 | orchestrator | Tuesday 17 February 2026 04:04:45 +0000 (0:00:00.705) 0:01:15.109 ****** 2026-02-17 04:05:20.674845 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:05:20.674856 | orchestrator | 2026-02-17 04:05:20.674867 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-17 04:05:20.674877 | orchestrator | Tuesday 17 February 2026 04:05:02 +0000 (0:00:17.498) 0:01:32.607 ****** 2026-02-17 04:05:20.674888 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:05:20.674899 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.674910 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.674920 | orchestrator | 2026-02-17 04:05:20.674931 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-02-17 04:05:20.674942 | orchestrator | 2026-02-17 04:05:20.674952 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-17 04:05:20.674963 | orchestrator | Tuesday 17 February 2026 04:05:03 +0000 (0:00:00.352) 0:01:32.960 ****** 2026-02-17 04:05:20.674986 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:05:20.674997 | orchestrator | 2026-02-17 04:05:20.675008 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-02-17 04:05:20.675019 | orchestrator | Tuesday 17 February 2026 04:05:04 +0000 (0:00:00.766) 0:01:33.726 ****** 2026-02-17 04:05:20.675029 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675040 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675051 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.675062 | 
orchestrator | 2026-02-17 04:05:20.675072 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-02-17 04:05:20.675083 | orchestrator | Tuesday 17 February 2026 04:05:06 +0000 (0:00:02.008) 0:01:35.735 ****** 2026-02-17 04:05:20.675094 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675104 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675115 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.675126 | orchestrator | 2026-02-17 04:05:20.675137 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-17 04:05:20.675147 | orchestrator | Tuesday 17 February 2026 04:05:08 +0000 (0:00:02.060) 0:01:37.795 ****** 2026-02-17 04:05:20.675158 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:05:20.675169 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675179 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675190 | orchestrator | 2026-02-17 04:05:20.675201 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-17 04:05:20.675230 | orchestrator | Tuesday 17 February 2026 04:05:08 +0000 (0:00:00.476) 0:01:38.272 ****** 2026-02-17 04:05:20.675241 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-17 04:05:20.675252 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675262 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-17 04:05:20.675273 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675284 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-17 04:05:20.675294 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-02-17 04:05:20.675305 | orchestrator | 2026-02-17 04:05:20.675316 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-17 04:05:20.675327 | orchestrator | Tuesday 17 February 2026 
04:05:15 +0000 (0:00:06.748) 0:01:45.020 ****** 2026-02-17 04:05:20.675337 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:05:20.675348 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675359 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675370 | orchestrator | 2026-02-17 04:05:20.675380 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-17 04:05:20.675391 | orchestrator | Tuesday 17 February 2026 04:05:15 +0000 (0:00:00.354) 0:01:45.374 ****** 2026-02-17 04:05:20.675402 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-17 04:05:20.675412 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:05:20.675430 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-17 04:05:20.675441 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675451 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-17 04:05:20.675462 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675473 | orchestrator | 2026-02-17 04:05:20.675483 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-17 04:05:20.675494 | orchestrator | Tuesday 17 February 2026 04:05:16 +0000 (0:00:01.088) 0:01:46.462 ****** 2026-02-17 04:05:20.675505 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675515 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675526 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:05:20.675537 | orchestrator | 2026-02-17 04:05:20.675548 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-02-17 04:05:20.675558 | orchestrator | Tuesday 17 February 2026 04:05:17 +0000 (0:00:00.469) 0:01:46.932 ****** 2026-02-17 04:05:20.675569 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675579 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675590 | orchestrator | changed: 
[testbed-node-0] 2026-02-17 04:05:20.675601 | orchestrator | 2026-02-17 04:05:20.675612 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-17 04:05:20.675622 | orchestrator | Tuesday 17 February 2026 04:05:18 +0000 (0:00:00.967) 0:01:47.900 ****** 2026-02-17 04:05:20.675633 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:05:20.675644 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:05:20.675663 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:06:35.990537 | orchestrator | 2026-02-17 04:06:35.990732 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-17 04:06:35.990754 | orchestrator | Tuesday 17 February 2026 04:05:20 +0000 (0:00:02.419) 0:01:50.319 ****** 2026-02-17 04:06:35.990767 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:35.990780 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:35.990791 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:06:35.990804 | orchestrator | 2026-02-17 04:06:35.990815 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-17 04:06:35.990827 | orchestrator | Tuesday 17 February 2026 04:05:41 +0000 (0:00:21.242) 0:02:11.562 ****** 2026-02-17 04:06:35.990838 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:35.990849 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:35.990860 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:06:35.990871 | orchestrator | 2026-02-17 04:06:35.990882 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-17 04:06:35.990894 | orchestrator | Tuesday 17 February 2026 04:05:53 +0000 (0:00:11.820) 0:02:23.382 ****** 2026-02-17 04:06:35.990905 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:06:35.990916 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:35.990927 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 04:06:35.990938 | orchestrator | 2026-02-17 04:06:35.990949 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-17 04:06:35.990960 | orchestrator | Tuesday 17 February 2026 04:05:54 +0000 (0:00:01.069) 0:02:24.451 ****** 2026-02-17 04:06:35.990971 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:35.990983 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:35.990995 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:06:35.991006 | orchestrator | 2026-02-17 04:06:35.991017 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-17 04:06:35.991028 | orchestrator | Tuesday 17 February 2026 04:06:05 +0000 (0:00:10.937) 0:02:35.388 ****** 2026-02-17 04:06:35.991039 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:06:35.991050 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:35.991061 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:35.991072 | orchestrator | 2026-02-17 04:06:35.991083 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-17 04:06:35.991094 | orchestrator | Tuesday 17 February 2026 04:06:06 +0000 (0:00:01.032) 0:02:36.421 ****** 2026-02-17 04:06:35.991131 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:06:35.991143 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:35.991154 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:35.991165 | orchestrator | 2026-02-17 04:06:35.991176 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-17 04:06:35.991187 | orchestrator | 2026-02-17 04:06:35.991198 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-17 04:06:35.991209 | orchestrator | Tuesday 17 February 2026 04:06:07 +0000 (0:00:00.315) 0:02:36.736 ****** 2026-02-17 04:06:35.991316 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:06:35.991424 | orchestrator | 2026-02-17 04:06:35.991447 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-17 04:06:35.991466 | orchestrator | Tuesday 17 February 2026 04:06:07 +0000 (0:00:00.749) 0:02:37.485 ****** 2026-02-17 04:06:35.991485 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-17 04:06:35.991504 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-17 04:06:35.991525 | orchestrator | 2026-02-17 04:06:35.991545 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-17 04:06:35.991564 | orchestrator | Tuesday 17 February 2026 04:06:11 +0000 (0:00:03.606) 0:02:41.092 ****** 2026-02-17 04:06:35.991586 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-17 04:06:35.991609 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-17 04:06:35.991626 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-17 04:06:35.991637 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-17 04:06:35.991649 | orchestrator | 2026-02-17 04:06:35.991660 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-17 04:06:35.991672 | orchestrator | Tuesday 17 February 2026 04:06:17 +0000 (0:00:06.219) 0:02:47.311 ****** 2026-02-17 04:06:35.991683 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:06:35.991694 | orchestrator | 2026-02-17 04:06:35.991705 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-02-17 04:06:35.991742 | orchestrator | Tuesday 17 February 2026 04:06:20 +0000 (0:00:03.055) 0:02:50.367 ****** 2026-02-17 04:06:35.991753 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:06:35.991764 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-17 04:06:35.991775 | orchestrator | 2026-02-17 04:06:35.991786 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-17 04:06:35.991797 | orchestrator | Tuesday 17 February 2026 04:06:24 +0000 (0:00:03.664) 0:02:54.031 ****** 2026-02-17 04:06:35.991808 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-17 04:06:35.991819 | orchestrator | 2026-02-17 04:06:35.991830 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-17 04:06:35.991841 | orchestrator | Tuesday 17 February 2026 04:06:27 +0000 (0:00:03.064) 0:02:57.096 ****** 2026-02-17 04:06:35.991852 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-17 04:06:35.991863 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-17 04:06:35.991874 | orchestrator | 2026-02-17 04:06:35.991886 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-17 04:06:35.991928 | orchestrator | Tuesday 17 February 2026 04:06:34 +0000 (0:00:07.252) 0:03:04.349 ****** 2026-02-17 04:06:35.991947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:35.991981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:35.991995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:35.992056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-17 04:06:40.427075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:40.427188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:40.427205 | orchestrator | 2026-02-17 04:06:40.427219 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-17 04:06:40.427233 | orchestrator | Tuesday 17 February 2026 04:06:35 +0000 (0:00:01.290) 0:03:05.639 ****** 2026-02-17 04:06:40.427245 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:06:40.427258 | orchestrator | 2026-02-17 04:06:40.427287 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-17 04:06:40.427309 | orchestrator | Tuesday 17 February 2026 04:06:36 +0000 (0:00:00.139) 0:03:05.778 ****** 2026-02-17 04:06:40.427322 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:06:40.427334 | 
orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:40.427345 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:40.427363 | orchestrator | 2026-02-17 04:06:40.427379 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-17 04:06:40.427390 | orchestrator | Tuesday 17 February 2026 04:06:36 +0000 (0:00:00.316) 0:03:06.095 ****** 2026-02-17 04:06:40.427491 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 04:06:40.427505 | orchestrator | 2026-02-17 04:06:40.427516 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-17 04:06:40.427527 | orchestrator | Tuesday 17 February 2026 04:06:37 +0000 (0:00:00.679) 0:03:06.775 ****** 2026-02-17 04:06:40.427538 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:06:40.427549 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:40.427560 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:40.427571 | orchestrator | 2026-02-17 04:06:40.427582 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-17 04:06:40.427593 | orchestrator | Tuesday 17 February 2026 04:06:37 +0000 (0:00:00.531) 0:03:07.307 ****** 2026-02-17 04:06:40.427606 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:06:40.427620 | orchestrator | 2026-02-17 04:06:40.427635 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-17 04:06:40.427655 | orchestrator | Tuesday 17 February 2026 04:06:38 +0000 (0:00:00.573) 0:03:07.880 ****** 2026-02-17 04:06:40.427690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:40.427804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:40.427829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:40.427846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:40.427862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:40.427900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:40.427913 | orchestrator | 2026-02-17 04:06:40.427932 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-17 04:06:42.077987 | orchestrator | Tuesday 17 February 2026 04:06:40 +0000 (0:00:02.194) 0:03:10.075 ****** 2026-02-17 04:06:42.078156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:42.078179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:42.078193 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:06:42.078208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:42.078242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:42.078268 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:42.078303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:42.078317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:42.078329 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:42.078340 | orchestrator | 2026-02-17 04:06:42.078352 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-17 04:06:42.078363 | orchestrator | Tuesday 17 February 2026 04:06:41 +0000 (0:00:00.849) 
0:03:10.924 ****** 2026-02-17 04:06:42.078375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:42.078420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:42.078432 | orchestrator | skipping: 
[testbed-node-0] 2026-02-17 04:06:42.078460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:44.429766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:44.429870 | orchestrator | skipping: 
[testbed-node-1] 2026-02-17 04:06:44.429890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:44.429936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:44.429958 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 04:06:44.429978 | orchestrator | 2026-02-17 04:06:44.429999 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-17 04:06:44.430088 | orchestrator | Tuesday 17 February 2026 04:06:42 +0000 (0:00:00.806) 0:03:11.731 ****** 2026-02-17 04:06:44.430134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:44.430186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:44.430210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:44.430249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:44.430279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:44.430314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:50.627835 | orchestrator | 2026-02-17 04:06:50.627968 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-17 04:06:50.627988 | orchestrator | Tuesday 17 February 2026 04:06:44 +0000 (0:00:02.350) 0:03:14.082 ****** 2026-02-17 04:06:50.628005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:50.628047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:50.628077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:50.628111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:50.628125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:50.628145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:50.628156 | orchestrator | 2026-02-17 04:06:50.628168 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-17 04:06:50.628179 | orchestrator | Tuesday 17 February 2026 04:06:49 +0000 (0:00:05.565) 0:03:19.647 ****** 2026-02-17 04:06:50.628197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:50.628209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:50.628253 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:06:50.628277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:54.886007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:54.886173 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:54.886192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-17 04:06:54.886223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:06:54.886236 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:54.886248 | orchestrator | 2026-02-17 04:06:54.886260 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-17 04:06:54.886272 | orchestrator | Tuesday 17 February 2026 04:06:50 +0000 (0:00:00.631) 0:03:20.279 ****** 2026-02-17 04:06:54.886284 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:06:54.886295 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:06:54.886306 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:06:54.886317 | orchestrator | 2026-02-17 04:06:54.886329 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-17 04:06:54.886340 | orchestrator | Tuesday 17 February 2026 04:06:52 +0000 (0:00:01.498) 0:03:21.778 ****** 2026-02-17 04:06:54.886351 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:06:54.886362 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:06:54.886373 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:06:54.886384 | orchestrator | 2026-02-17 04:06:54.886395 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-17 04:06:54.886406 | orchestrator | Tuesday 17 February 2026 04:06:52 +0000 (0:00:00.344) 0:03:22.122 ****** 2026-02-17 04:06:54.886485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:54.886525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:54.886549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-17 04:06:54.886564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:54.886587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:06:54.886609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:35.836436 | orchestrator | 2026-02-17 04:07:35.836630 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-17 04:07:35.837187 | orchestrator | Tuesday 17 February 2026 04:06:54 +0000 (0:00:01.974) 0:03:24.097 ****** 2026-02-17 04:07:35.837218 | orchestrator | 2026-02-17 04:07:35.837232 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-17 04:07:35.837247 | orchestrator | Tuesday 17 February 2026 04:06:54 
+0000 (0:00:00.159) 0:03:24.256 ****** 2026-02-17 04:07:35.837259 | orchestrator | 2026-02-17 04:07:35.837271 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-17 04:07:35.837282 | orchestrator | Tuesday 17 February 2026 04:06:54 +0000 (0:00:00.137) 0:03:24.394 ****** 2026-02-17 04:07:35.837293 | orchestrator | 2026-02-17 04:07:35.837304 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-17 04:07:35.837316 | orchestrator | Tuesday 17 February 2026 04:06:54 +0000 (0:00:00.138) 0:03:24.533 ****** 2026-02-17 04:07:35.837327 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:07:35.837394 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:07:35.837406 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:07:35.837417 | orchestrator | 2026-02-17 04:07:35.837429 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-17 04:07:35.837440 | orchestrator | Tuesday 17 February 2026 04:07:13 +0000 (0:00:19.025) 0:03:43.559 ****** 2026-02-17 04:07:35.837451 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:07:35.837463 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:07:35.837474 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:07:35.837514 | orchestrator | 2026-02-17 04:07:35.837526 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-17 04:07:35.837537 | orchestrator | 2026-02-17 04:07:35.837548 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-17 04:07:35.837560 | orchestrator | Tuesday 17 February 2026 04:07:24 +0000 (0:00:10.395) 0:03:53.954 ****** 2026-02-17 04:07:35.837572 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:07:35.837585 | 
orchestrator | 2026-02-17 04:07:35.837597 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-17 04:07:35.837624 | orchestrator | Tuesday 17 February 2026 04:07:25 +0000 (0:00:01.235) 0:03:55.190 ****** 2026-02-17 04:07:35.837636 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:07:35.837647 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:07:35.837658 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:07:35.837693 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:07:35.837705 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:07:35.837715 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:07:35.837726 | orchestrator | 2026-02-17 04:07:35.837737 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-17 04:07:35.837748 | orchestrator | Tuesday 17 February 2026 04:07:26 +0000 (0:00:00.798) 0:03:55.989 ****** 2026-02-17 04:07:35.837759 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:07:35.837770 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:07:35.837781 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:07:35.837792 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:07:35.837804 | orchestrator | 2026-02-17 04:07:35.837815 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-17 04:07:35.837826 | orchestrator | Tuesday 17 February 2026 04:07:27 +0000 (0:00:00.833) 0:03:56.823 ****** 2026-02-17 04:07:35.837837 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-17 04:07:35.837849 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-17 04:07:35.837859 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-17 04:07:35.837870 | orchestrator | 2026-02-17 04:07:35.837881 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2026-02-17 04:07:35.837892 | orchestrator | Tuesday 17 February 2026 04:07:28 +0000 (0:00:00.844) 0:03:57.668 ****** 2026-02-17 04:07:35.837903 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-17 04:07:35.837914 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-17 04:07:35.837925 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-17 04:07:35.837936 | orchestrator | 2026-02-17 04:07:35.837947 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-17 04:07:35.837958 | orchestrator | Tuesday 17 February 2026 04:07:29 +0000 (0:00:01.247) 0:03:58.915 ****** 2026-02-17 04:07:35.837969 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-17 04:07:35.837980 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:07:35.837990 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-17 04:07:35.838001 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:07:35.838065 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-17 04:07:35.838078 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:07:35.838089 | orchestrator | 2026-02-17 04:07:35.838100 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-17 04:07:35.838112 | orchestrator | Tuesday 17 February 2026 04:07:29 +0000 (0:00:00.535) 0:03:59.450 ****** 2026-02-17 04:07:35.838123 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-17 04:07:35.838134 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-17 04:07:35.838145 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 04:07:35.838156 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 04:07:35.838167 | orchestrator | 
skipping: [testbed-node-0] 2026-02-17 04:07:35.838178 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 04:07:35.838189 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 04:07:35.838200 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:07:35.838232 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-17 04:07:35.838244 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-17 04:07:35.838255 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:07:35.838267 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-17 04:07:35.838278 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-17 04:07:35.838297 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-17 04:07:35.838308 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-17 04:07:35.838319 | orchestrator | 2026-02-17 04:07:35.838330 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-17 04:07:35.838341 | orchestrator | Tuesday 17 February 2026 04:07:31 +0000 (0:00:01.278) 0:04:00.729 ****** 2026-02-17 04:07:35.838353 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:07:35.838364 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:07:35.838375 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:07:35.838386 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:07:35.838397 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:07:35.838408 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:07:35.838419 | orchestrator | 2026-02-17 04:07:35.838430 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-17 
04:07:35.838440 | orchestrator | Tuesday 17 February 2026 04:07:32 +0000 (0:00:01.229) 0:04:01.958 ****** 2026-02-17 04:07:35.838451 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:07:35.838462 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:07:35.838473 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:07:35.838528 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:07:35.838541 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:07:35.838552 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:07:35.838563 | orchestrator | 2026-02-17 04:07:35.838574 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-17 04:07:35.838585 | orchestrator | Tuesday 17 February 2026 04:07:34 +0000 (0:00:01.724) 0:04:03.682 ****** 2026-02-17 04:07:35.838605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:07:35.838624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:07:35.838644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:37.444858 | orchestrator | 2026-02-17 04:07:37.444870 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-17 04:07:37.444881 | 
orchestrator | Tuesday 17 February 2026 04:07:36 +0000 (0:00:02.168) 0:04:05.850 ****** 2026-02-17 04:07:37.444892 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:07:37.444903 | orchestrator | 2026-02-17 04:07:37.444913 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-17 04:07:37.444930 | orchestrator | Tuesday 17 February 2026 04:07:37 +0000 (0:00:01.241) 0:04:07.092 ****** 2026-02-17 04:07:40.620552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-02-17 04:07:40.620750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620812 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:40.620905 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:42.666688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:42.666800 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:42.666833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:07:42.666846 | orchestrator | 2026-02-17 04:07:42.666873 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-17 04:07:42.666886 | orchestrator | Tuesday 17 February 2026 04:07:41 +0000 (0:00:03.594) 0:04:10.686 ****** 2026-02-17 04:07:42.666899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:07:42.666988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:07:42.667022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-17 04:07:42.667035 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:07:42.667053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:07:42.667065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:07:42.667077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-17 04:07:42.667096 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:07:42.667108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:07:42.667127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:07:44.261043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-17 04:07:44.261154 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:07:44.261201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-17 04:07:44.261217 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:07:44.261285 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:07:44.261298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-17 04:07:44.261310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:07:44.261322 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
04:07:44.261333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-17 04:07:44.261364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:07:44.261376 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:07:44.261387 | orchestrator | 2026-02-17 04:07:44.261400 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-17 04:07:44.261412 | orchestrator | Tuesday 17 February 2026 04:07:42 +0000 (0:00:01.740) 0:04:12.427 ****** 2026-02-17 04:07:44.261430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:07:44.261451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:07:44.261464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-17 04:07:44.261476 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:07:44.261488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:07:44.261614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:07:48.386604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-17 04:07:48.386685 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:07:48.386710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-17 04:07:48.386716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:07:48.386721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:07:48.386727 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:07:48.386733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:07:48.386756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-17 04:07:48.386763 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:07:48.386772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-17 04:07:48.386781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:07:48.386786 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:07:48.386791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-17 04:07:48.386796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:07:48.386800 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:07:48.386805 | orchestrator | 2026-02-17 04:07:48.386811 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-17 04:07:48.386817 | orchestrator | Tuesday 17 February 2026 04:07:44 +0000 (0:00:02.016) 0:04:14.443 ****** 2026-02-17 04:07:48.386821 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:07:48.386826 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:07:48.386830 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:07:48.386836 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:07:48.386841 | orchestrator | 2026-02-17 04:07:48.386845 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-17 04:07:48.386850 | orchestrator | Tuesday 17 February 2026 04:07:45 +0000 
(0:00:01.065) 0:04:15.508 ****** 2026-02-17 04:07:48.386855 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-17 04:07:48.386859 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-17 04:07:48.386864 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-17 04:07:48.386869 | orchestrator | 2026-02-17 04:07:48.386873 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-17 04:07:48.386878 | orchestrator | Tuesday 17 February 2026 04:07:46 +0000 (0:00:01.046) 0:04:16.554 ****** 2026-02-17 04:07:48.386883 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-17 04:07:48.386887 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-17 04:07:48.386892 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-17 04:07:48.386896 | orchestrator | 2026-02-17 04:07:48.386901 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-17 04:07:48.386906 | orchestrator | Tuesday 17 February 2026 04:07:47 +0000 (0:00:00.950) 0:04:17.505 ****** 2026-02-17 04:07:48.386915 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:07:48.386920 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:07:48.386924 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:07:48.386929 | orchestrator | 2026-02-17 04:07:48.386937 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-17 04:08:09.604843 | orchestrator | Tuesday 17 February 2026 04:07:48 +0000 (0:00:00.532) 0:04:18.038 ****** 2026-02-17 04:08:09.604963 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:08:09.604981 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:08:09.604993 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:08:09.605005 | orchestrator | 2026-02-17 04:08:09.605017 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-02-17 04:08:09.605029 | orchestrator | Tuesday 17 February 2026 04:07:48 
+0000 (0:00:00.494) 0:04:18.532 ****** 2026-02-17 04:08:09.605041 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-17 04:08:09.605053 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-17 04:08:09.605064 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-17 04:08:09.605075 | orchestrator | 2026-02-17 04:08:09.605086 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-17 04:08:09.605098 | orchestrator | Tuesday 17 February 2026 04:07:50 +0000 (0:00:01.381) 0:04:19.914 ****** 2026-02-17 04:08:09.605125 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-17 04:08:09.605137 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-17 04:08:09.605149 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-17 04:08:09.605160 | orchestrator | 2026-02-17 04:08:09.605171 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-17 04:08:09.605182 | orchestrator | Tuesday 17 February 2026 04:07:51 +0000 (0:00:01.222) 0:04:21.137 ****** 2026-02-17 04:08:09.605193 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-17 04:08:09.605204 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-17 04:08:09.605215 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-17 04:08:09.605226 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-17 04:08:09.605237 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-17 04:08:09.605249 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-17 04:08:09.605260 | orchestrator | 2026-02-17 04:08:09.605271 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-17 04:08:09.605282 | orchestrator | Tuesday 17 February 2026 04:07:55 +0000 (0:00:03.757) 
0:04:24.895 ****** 2026-02-17 04:08:09.605294 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:09.605306 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:08:09.605317 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:08:09.605374 | orchestrator | 2026-02-17 04:08:09.605389 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-17 04:08:09.605403 | orchestrator | Tuesday 17 February 2026 04:07:55 +0000 (0:00:00.317) 0:04:25.212 ****** 2026-02-17 04:08:09.605416 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:09.605428 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:08:09.605441 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:08:09.605454 | orchestrator | 2026-02-17 04:08:09.605467 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-17 04:08:09.605482 | orchestrator | Tuesday 17 February 2026 04:07:56 +0000 (0:00:00.485) 0:04:25.698 ****** 2026-02-17 04:08:09.605495 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:08:09.605508 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:08:09.605520 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:08:09.605601 | orchestrator | 2026-02-17 04:08:09.605617 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-17 04:08:09.605631 | orchestrator | Tuesday 17 February 2026 04:07:57 +0000 (0:00:01.230) 0:04:26.928 ****** 2026-02-17 04:08:09.605668 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-17 04:08:09.605684 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-17 04:08:09.605697 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 
'name': 'client.nova secret', 'enabled': True}) 2026-02-17 04:08:09.605710 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-17 04:08:09.605724 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-17 04:08:09.605735 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-17 04:08:09.605746 | orchestrator | 2026-02-17 04:08:09.605757 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-17 04:08:09.605768 | orchestrator | Tuesday 17 February 2026 04:08:00 +0000 (0:00:03.240) 0:04:30.169 ****** 2026-02-17 04:08:09.605779 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-17 04:08:09.605790 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-17 04:08:09.605801 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-17 04:08:09.605812 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-17 04:08:09.605823 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:08:09.605833 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-17 04:08:09.605844 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:08:09.605855 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-17 04:08:09.605866 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:08:09.605877 | orchestrator | 2026-02-17 04:08:09.605888 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-17 04:08:09.605899 | orchestrator | Tuesday 17 February 2026 04:08:03 +0000 (0:00:03.362) 0:04:33.531 ****** 2026-02-17 04:08:09.605910 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:09.605921 | orchestrator | 
2026-02-17 04:08:09.605950 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-17 04:08:09.605963 | orchestrator | Tuesday 17 February 2026 04:08:04 +0000 (0:00:00.145) 0:04:33.676 ****** 2026-02-17 04:08:09.605974 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:09.605985 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:08:09.605996 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:08:09.606007 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:09.606078 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:09.606090 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:09.606101 | orchestrator | 2026-02-17 04:08:09.606113 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-17 04:08:09.606124 | orchestrator | Tuesday 17 February 2026 04:08:04 +0000 (0:00:00.825) 0:04:34.502 ****** 2026-02-17 04:08:09.606135 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-17 04:08:09.606146 | orchestrator | 2026-02-17 04:08:09.606157 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-17 04:08:09.606168 | orchestrator | Tuesday 17 February 2026 04:08:05 +0000 (0:00:00.727) 0:04:35.230 ****** 2026-02-17 04:08:09.606186 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:09.606198 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:08:09.606209 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:08:09.606219 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:09.606230 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:09.606241 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:09.606252 | orchestrator | 2026-02-17 04:08:09.606263 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-02-17 04:08:09.606274 | orchestrator | Tuesday 17 February 2026 04:08:06 +0000 
(0:00:00.809) 0:04:36.040 ****** 2026-02-17 04:08:09.606298 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:08:09.606314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:08:09.606326 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-17 04:08:09.606390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2026-02-17 04:08:14.266901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:14.266990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:14.267001 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:14.267013 | orchestrator | 2026-02-17 04:08:14.267027 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-17 04:08:14.267039 | orchestrator | Tuesday 17 February 2026 04:08:09 +0000 (0:00:03.359) 0:04:39.399 ****** 2026-02-17 04:08:14.267057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:08:16.269484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:08:16.269673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:08:16.269694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:08:16.269706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:08:16.269718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-17 04:08:16.269747 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:16.269774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:16.269786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:08:16.269798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:08:16.269810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:16.269821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-17 04:08:16.269841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:33.775645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:33.775755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:08:33.775771 | orchestrator | 2026-02-17 04:08:33.775784 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-17 04:08:33.775796 | orchestrator | Tuesday 17 February 2026 04:08:16 +0000 (0:00:06.524) 0:04:45.924 ****** 2026-02-17 04:08:33.775806 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:08:33.775817 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:33.775827 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:08:33.775837 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:33.775847 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:33.775856 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:33.775866 | orchestrator | 2026-02-17 04:08:33.775876 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-17 04:08:33.775886 | orchestrator | Tuesday 17 February 2026 04:08:17 +0000 (0:00:01.123) 0:04:47.048 ****** 2026-02-17 04:08:33.775896 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-17 04:08:33.775907 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-17 04:08:33.775917 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-17 04:08:33.775927 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-17 04:08:33.775936 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-17 04:08:33.775946 
| orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-17 04:08:33.775957 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:33.775968 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-17 04:08:33.775977 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-17 04:08:33.775987 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:33.775997 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-17 04:08:33.776007 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:33.776017 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-17 04:08:33.776027 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-17 04:08:33.776057 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-17 04:08:33.776067 | orchestrator | 2026-02-17 04:08:33.776078 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-17 04:08:33.776088 | orchestrator | Tuesday 17 February 2026 04:08:20 +0000 (0:00:03.581) 0:04:50.629 ****** 2026-02-17 04:08:33.776097 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:33.776107 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:08:33.776117 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:08:33.776127 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:33.776136 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:33.776148 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:33.776159 | orchestrator | 2026-02-17 04:08:33.776170 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 
2026-02-17 04:08:33.776182 | orchestrator | Tuesday 17 February 2026 04:08:21 +0000 (0:00:00.596) 0:04:51.226 ****** 2026-02-17 04:08:33.776193 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-17 04:08:33.776205 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-17 04:08:33.776217 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-17 04:08:33.776228 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-17 04:08:33.776254 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-17 04:08:33.776266 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-17 04:08:33.776283 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-17 04:08:33.776295 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-17 04:08:33.776306 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-17 04:08:33.776318 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-17 04:08:33.776329 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:33.776340 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-17 04:08:33.776351 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:33.776363 | orchestrator | 
skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-17 04:08:33.776374 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:33.776430 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-17 04:08:33.776442 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-17 04:08:33.776453 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-17 04:08:33.776464 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-17 04:08:33.776475 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-17 04:08:33.776487 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-17 04:08:33.776499 | orchestrator | 2026-02-17 04:08:33.776509 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-17 04:08:33.776519 | orchestrator | Tuesday 17 February 2026 04:08:26 +0000 (0:00:05.039) 0:04:56.265 ****** 2026-02-17 04:08:33.776537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 04:08:33.776548 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 04:08:33.776557 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 04:08:33.776588 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-17 04:08:33.776599 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 
'dest': 'id_rsa'})  2026-02-17 04:08:33.776609 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-17 04:08:33.776618 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-17 04:08:33.776628 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-17 04:08:33.776637 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-17 04:08:33.776647 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 04:08:33.776657 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 04:08:33.776666 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 04:08:33.776676 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-17 04:08:33.776685 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-17 04:08:33.776695 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:33.776705 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:33.776715 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-17 04:08:33.776725 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:33.776734 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-17 04:08:33.776744 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-17 04:08:33.776754 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-17 04:08:33.776763 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-17 04:08:33.776773 | 
orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-17 04:08:33.776783 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-17 04:08:33.776793 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-17 04:08:33.776810 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-17 04:08:38.473455 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-17 04:08:38.473554 | orchestrator | 2026-02-17 04:08:38.473640 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-17 04:08:38.473656 | orchestrator | Tuesday 17 February 2026 04:08:33 +0000 (0:00:07.142) 0:05:03.408 ****** 2026-02-17 04:08:38.473668 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:38.473680 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:08:38.473691 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:08:38.473702 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:38.473713 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:38.473724 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:38.473735 | orchestrator | 2026-02-17 04:08:38.473746 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-17 04:08:38.473757 | orchestrator | Tuesday 17 February 2026 04:08:34 +0000 (0:00:00.802) 0:05:04.210 ****** 2026-02-17 04:08:38.473768 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:08:38.473804 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:08:38.473846 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:08:38.473858 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:38.473869 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:38.473879 | orchestrator | 
skipping: [testbed-node-2] 2026-02-17 04:08:38.473890 | orchestrator | 2026-02-17 04:08:38.473901 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-17 04:08:38.473912 | orchestrator | Tuesday 17 February 2026 04:08:35 +0000 (0:00:00.619) 0:05:04.830 ****** 2026-02-17 04:08:38.473923 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:08:38.473934 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:08:38.473945 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:08:38.473956 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:08:38.473967 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:08:38.473979 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:08:38.473991 | orchestrator | 2026-02-17 04:08:38.474004 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-17 04:08:38.474070 | orchestrator | Tuesday 17 February 2026 04:08:37 +0000 (0:00:02.206) 0:05:07.037 ****** 2026-02-17 04:08:38.474087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-17 04:08:38.474105 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-17 04:08:38.474121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-17 04:08:38.474135 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:08:38.474176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-17 04:08:38.474200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-17 04:08:38.474212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-17 04:08:38.474224 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:08:38.474236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-17 04:08:38.474247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-17 04:08:38.474267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-17 04:08:41.967491 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:08:41.967645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-17 04:08:41.967664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:08:41.967675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-17 04:08:41.967686 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:08:41.967696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:08:41.967707 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:08:41.967717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-17 04:08:41.967728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:08:41.967760 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:08:41.967772 | orchestrator |
2026-02-17 04:08:41.967782 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-02-17 04:08:41.967794 | orchestrator | Tuesday 17 February 2026 04:08:38 +0000 (0:00:01.415) 0:05:08.452 ******
2026-02-17 04:08:41.967804 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-17 04:08:41.967831 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-17 04:08:41.967847 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:08:41.967858 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-17 04:08:41.967868 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-17 04:08:41.967878 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:08:41.967887 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-17 04:08:41.967897 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-17 04:08:41.967907 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:08:41.967917 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-17 04:08:41.967927 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-17 04:08:41.967936 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:08:41.967946 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-17 04:08:41.967956 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-17 04:08:41.967966 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:08:41.967976 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-17 04:08:41.967986 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-17 04:08:41.967995 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:08:41.968005 | orchestrator |
2026-02-17 04:08:41.968017 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-02-17 04:08:41.968029 | orchestrator | Tuesday 17 February 2026 04:08:39 +0000 (0:00:00.852) 0:05:09.304 ******
2026-02-17 04:08:41.968042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-17 04:08:41.968056 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-17 04:08:41.968075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-17 04:08:41.968099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-17 04:08:44.029918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-17 04:08:44.030088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-17 04:08:44.030117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-17 04:08:44.030131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-17 04:08:44.030168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-17 04:08:44.030181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:08:44.030226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-17 04:08:44.030240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:08:44.030252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-17 04:08:44.030264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:08:44.030283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-17 04:08:44.030296 | orchestrator |
2026-02-17 04:08:44.030309 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-17 04:08:44.030322 | orchestrator | Tuesday 17 February 2026 04:08:42 +0000 (0:00:02.531) 0:05:11.836 ******
2026-02-17 04:08:44.030333 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:08:44.030346 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:08:44.030357 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:08:44.030368 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:08:44.030379 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:08:44.030390 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:08:44.030401 | orchestrator |
2026-02-17 04:08:44.030413 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-17 04:08:44.030424 | orchestrator | Tuesday 17 February 2026 04:08:42 +0000 (0:00:00.141) 0:05:12.641 ******
2026-02-17 04:08:44.030435 | orchestrator |
2026-02-17 04:08:44.030446 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-17 04:08:44.030457 | orchestrator | Tuesday 17 February 2026 04:08:43 +0000 (0:00:00.139) 0:05:12.783 ******
2026-02-17 04:08:44.030468 | orchestrator |
2026-02-17 04:08:44.030480 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-17 04:08:44.030496 | orchestrator | Tuesday 17 February 2026 04:08:43 +0000 (0:00:00.143) 0:05:13.065 ******
2026-02-17 04:11:49.647555 | orchestrator |
2026-02-17 04:11:49.647686 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-17 04:11:49.647715 | orchestrator | Tuesday 17 February 2026 04:08:43 +0000 (0:00:00.149) 0:05:13.215 ******
2026-02-17 04:11:49.647736 | orchestrator |
2026-02-17 04:11:49.647754 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-17 04:11:49.647773 | orchestrator | Tuesday 17 February 2026 04:08:43 +0000 (0:00:00.313) 0:05:13.528 ******
2026-02-17 04:11:49.647790 | orchestrator |
2026-02-17 04:11:49.647808 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-02-17 04:11:49.647827 | orchestrator | Tuesday 17 February 2026 04:08:44 +0000 (0:00:00.139) 0:05:13.668 ******
2026-02-17 04:11:49.647848 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:11:49.647868 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:11:49.647932 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:11:49.647944 | orchestrator |
2026-02-17 04:11:49.647955 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-02-17 04:11:49.647967 | orchestrator | Tuesday 17 February 2026 04:08:55 +0000 (0:00:11.893) 0:05:25.562 ******
2026-02-17 04:11:49.647978 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:11:49.647989 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:11:49.648021 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:11:49.648047 | orchestrator |
2026-02-17 04:11:49.648067 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-02-17 04:11:49.648129 | orchestrator | Tuesday 17 February 2026 04:09:09 +0000 (0:00:13.924) 0:05:39.486 ******
2026-02-17 04:11:49.648152 | orchestrator | changed: [testbed-node-3]
2026-02-17 04:11:49.648172 | orchestrator | changed: [testbed-node-5]
2026-02-17 04:11:49.648191 | orchestrator | changed: [testbed-node-4]
2026-02-17 04:11:49.648210 | orchestrator |
2026-02-17 04:11:49.648231 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-02-17 04:11:49.648248 | orchestrator | Tuesday 17 February 2026 04:09:36 +0000 (0:00:26.188) 0:06:05.675 ******
2026-02-17 04:11:49.648264 | orchestrator | changed: [testbed-node-4]
2026-02-17 04:11:49.648283 | orchestrator | changed: [testbed-node-5]
2026-02-17 04:11:49.648305 | orchestrator | changed: [testbed-node-3]
2026-02-17 04:11:49.648326 | orchestrator |
2026-02-17 04:11:49.648345 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-02-17 04:11:49.648364 | orchestrator | Tuesday 17 February 2026 04:10:13 +0000 (0:00:36.998) 0:06:42.674 ******
2026-02-17 04:11:49.648384 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-02-17 04:11:49.648406 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2026-02-17 04:11:49.648425 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-02-17 04:11:49.648445 | orchestrator | changed: [testbed-node-3]
2026-02-17 04:11:49.648464 | orchestrator | changed: [testbed-node-4]
2026-02-17 04:11:49.648484 | orchestrator | changed: [testbed-node-5]
2026-02-17 04:11:49.648502 | orchestrator |
2026-02-17 04:11:49.648522 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-17 04:11:49.648535 | orchestrator | Tuesday 17 February 2026 04:10:19 +0000 (0:00:06.227) 0:06:48.901 ******
2026-02-17 04:11:49.648546 | orchestrator | changed: [testbed-node-3]
2026-02-17 04:11:49.648558 | orchestrator | changed: [testbed-node-4]
2026-02-17 04:11:49.648569 | orchestrator | changed: [testbed-node-5]
2026-02-17 04:11:49.648580 | orchestrator |
2026-02-17 04:11:49.648590 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-17 04:11:49.648602 | orchestrator | Tuesday 17 February 2026 04:10:20 +0000 (0:00:00.784) 0:06:49.686 ******
2026-02-17 04:11:49.648670 | orchestrator | changed: [testbed-node-3]
2026-02-17 04:11:49.648682 | orchestrator | changed: [testbed-node-4]
2026-02-17 04:11:49.648693 | orchestrator | changed: [testbed-node-5]
2026-02-17 04:11:49.648704 | orchestrator |
2026-02-17 04:11:49.648715 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-02-17 04:11:49.648743 | orchestrator | Tuesday 17 February 2026 04:10:45 +0000 (0:00:25.287) 0:07:14.973 ******
2026-02-17 04:11:49.648754 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:11:49.648765 | orchestrator |
2026-02-17 04:11:49.648776 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-17 04:11:49.648787 | orchestrator | Tuesday 17 February 2026 04:10:45 +0000 (0:00:00.137) 0:07:15.111 ******
2026-02-17 04:11:49.648798 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:11:49.648809 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:11:49.648820 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:11:49.648831 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:11:49.648842 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:11:49.648853 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-02-17 04:11:49.648866 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-17 04:11:49.648907 | orchestrator |
2026-02-17 04:11:49.648919 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-17 04:11:49.648930 | orchestrator | Tuesday 17 February 2026 04:11:07 +0000 (0:00:22.092) 0:07:37.203 ******
2026-02-17 04:11:49.648941 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:11:49.648952 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:11:49.648977 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:11:49.648988 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:11:49.648998 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:11:49.649009 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:11:49.649020 | orchestrator |
2026-02-17 04:11:49.649031 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-17 04:11:49.649057 | orchestrator | Tuesday 17 February 2026 04:11:15 +0000 (0:00:08.417) 0:07:45.620 ******
2026-02-17 04:11:49.649069 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:11:49.649079 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:11:49.649091 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:11:49.649102 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:11:49.649113 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:11:49.649149 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-02-17 04:11:49.649161 | orchestrator |
2026-02-17 04:11:49.649172 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-17 04:11:49.649183 | orchestrator | Tuesday 17 February 2026 04:11:19 +0000 (0:00:03.785) 0:07:49.405 ******
2026-02-17 04:11:49.649193 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-17 04:11:49.649205 | orchestrator |
2026-02-17 04:11:49.649215 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-17 04:11:49.649226 | orchestrator | Tuesday 17 February 2026 04:11:32 +0000 (0:00:12.685) 0:08:02.091 ******
2026-02-17 04:11:49.649237 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-17 04:11:49.649248 | orchestrator |
2026-02-17 04:11:49.649258 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-17 04:11:49.649269 | orchestrator | Tuesday 17 February 2026 04:11:33 +0000 (0:00:01.472) 0:08:03.564 ******
2026-02-17 04:11:49.649280 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:11:49.649291 | orchestrator |
2026-02-17 04:11:49.649302 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-17 04:11:49.649313 | orchestrator | Tuesday 17 February 2026 04:11:35 +0000 (0:00:01.684) 0:08:05.249 ******
2026-02-17 04:11:49.649323 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-17 04:11:49.649334 | orchestrator |
2026-02-17 04:11:49.649345 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-17 04:11:49.649356 | orchestrator | Tuesday 17 February 2026 04:11:45 +0000 (0:00:10.120) 0:08:15.369 ******
2026-02-17 04:11:49.649367 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:11:49.649379 | orchestrator | ok: [testbed-node-4]
2026-02-17 04:11:49.649390 | orchestrator | ok: [testbed-node-5]
2026-02-17 04:11:49.649408 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:11:49.649430 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:11:49.649448 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:11:49.649466 | orchestrator |
2026-02-17 04:11:49.649487 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-17 04:11:49.649507 | orchestrator |
2026-02-17 04:11:49.649527 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-17 04:11:49.649548 | orchestrator | Tuesday 17 February 2026 04:11:47 +0000 (0:00:01.699) 0:08:17.069 ******
2026-02-17 04:11:49.649568 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:11:49.649588 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:11:49.649609 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:11:49.649631 | orchestrator |
2026-02-17 04:11:49.649653 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-17 04:11:49.649676 | orchestrator |
2026-02-17 04:11:49.649698 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-17 04:11:49.649718 | orchestrator | Tuesday 17 February 2026 04:11:48 +0000 (0:00:00.938) 0:08:18.007 ******
2026-02-17 04:11:49.649736 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:11:49.649757 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:11:49.649778 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:11:49.649813 | orchestrator |
2026-02-17 04:11:49.649833 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-17 04:11:49.649852 | orchestrator |
2026-02-17 04:11:49.649899 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-17 04:11:49.649921 | orchestrator | Tuesday 17 February 2026 04:11:49 +0000 (0:00:00.724) 0:08:18.732 ******
2026-02-17 04:11:49.649940 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-17 04:11:49.649958 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-17 04:11:49.649976 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-17 04:11:49.649994 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-17 04:11:49.650013 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-17 04:11:49.650100 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-17 04:11:49.650113 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:11:49.650124 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-17 04:11:49.650134 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-17 04:11:49.650145 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-17 04:11:49.650156 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-17 04:11:49.650167 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-17 04:11:49.650178 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-17 04:11:49.650189 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:11:49.650200 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-17 04:11:49.650210 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-17 04:11:49.650221 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-17 04:11:49.650232 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-17 04:11:49.650243 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-17 04:11:49.650254 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-17 04:11:49.650265 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:11:49.650276 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-17 04:11:49.650288 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-17 04:11:49.650307 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-17 04:11:49.650325 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-17 04:11:49.650353 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-17 04:11:49.650371 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-17 04:11:49.650388 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:11:49.650405 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-17 04:11:49.650446 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-17 04:11:52.735633 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-17 04:11:52.735731 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-17 04:11:52.735744 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-17 04:11:52.735754 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-17 04:11:52.735765 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:11:52.735775 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-17 04:11:52.735784 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-17 04:11:52.735794 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-17 04:11:52.735803 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-17 04:11:52.735812 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-17 04:11:52.735820 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-17 04:11:52.735853 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:11:52.735862 | orchestrator |
2026-02-17 04:11:52.735871 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-17 04:11:52.735940 | orchestrator |
2026-02-17 04:11:52.735951 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-17 04:11:52.735960 | orchestrator | Tuesday 17 February 2026 04:11:50 +0000 (0:00:01.325)
0:08:20.057 ****** 2026-02-17 04:11:52.735969 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-02-17 04:11:52.735978 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-02-17 04:11:52.735987 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:11:52.735996 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-02-17 04:11:52.736005 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-02-17 04:11:52.736013 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:11:52.736022 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-02-17 04:11:52.736043 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-02-17 04:11:52.736069 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:11:52.736084 | orchestrator | 2026-02-17 04:11:52.736098 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-02-17 04:11:52.736112 | orchestrator | 2026-02-17 04:11:52.736126 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-02-17 04:11:52.736141 | orchestrator | Tuesday 17 February 2026 04:11:50 +0000 (0:00:00.560) 0:08:20.617 ****** 2026-02-17 04:11:52.736156 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:11:52.736171 | orchestrator | 2026-02-17 04:11:52.736186 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-02-17 04:11:52.736201 | orchestrator | 2026-02-17 04:11:52.736217 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-02-17 04:11:52.736228 | orchestrator | Tuesday 17 February 2026 04:11:51 +0000 (0:00:00.912) 0:08:21.530 ****** 2026-02-17 04:11:52.736238 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:11:52.736249 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:11:52.736259 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 04:11:52.736268 | orchestrator |
2026-02-17 04:11:52.736279 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 04:11:52.736290 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-17 04:11:52.736304 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-17 04:11:52.736314 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-17 04:11:52.736324 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-17 04:11:52.736334 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-17 04:11:52.736344 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-17 04:11:52.736354 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-17 04:11:52.736364 | orchestrator |
2026-02-17 04:11:52.736375 | orchestrator |
2026-02-17 04:11:52.736385 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 04:11:52.736395 | orchestrator | Tuesday 17 February 2026 04:11:52 +0000 (0:00:00.474) 0:08:22.005 ******
2026-02-17 04:11:52.736405 | orchestrator | ===============================================================================
2026-02-17 04:11:52.736429 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.00s
2026-02-17 04:11:52.736439 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.86s
2026-02-17 04:11:52.736450 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.19s
2026-02-17 04:11:52.736474 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.29s
2026-02-17 04:11:52.736485 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.09s
2026-02-17 04:11:52.736495 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.24s
2026-02-17 04:11:52.736524 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.03s
2026-02-17 04:11:52.736534 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.50s
2026-02-17 04:11:52.736545 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.47s
2026-02-17 04:11:52.736554 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.92s
2026-02-17 04:11:52.736563 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.69s
2026-02-17 04:11:52.736571 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.89s
2026-02-17 04:11:52.736580 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.88s
2026-02-17 04:11:52.736589 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.82s
2026-02-17 04:11:52.736597 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.94s
2026-02-17 04:11:52.736606 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.40s
2026-02-17 04:11:52.736614 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.12s
2026-02-17 04:11:52.736623 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.42s
2026-02-17 04:11:52.736632 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.25s
2026-02-17 04:11:52.736640 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.14s
2026-02-17 04:11:56.185029 | orchestrator | 2026-02-17 04:11:56 | INFO  | Task 07b9e2f7-d8b3-4e30-a01a-e7bcc7cb2260 (horizon) was prepared for execution.
2026-02-17 04:11:56.185132 | orchestrator | 2026-02-17 04:11:56 | INFO  | It takes a moment until task 07b9e2f7-d8b3-4e30-a01a-e7bcc7cb2260 (horizon) has been started and output is visible here.
2026-02-17 04:12:03.267511 | orchestrator |
2026-02-17 04:12:03.267624 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 04:12:03.267640 | orchestrator |
2026-02-17 04:12:03.267652 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 04:12:03.267663 | orchestrator | Tuesday 17 February 2026 04:12:00 +0000 (0:00:00.257) 0:00:00.257 ******
2026-02-17 04:12:03.267675 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:03.267687 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:03.267698 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:03.267709 | orchestrator |
2026-02-17 04:12:03.267721 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 04:12:03.267732 | orchestrator | Tuesday 17 February 2026 04:12:00 +0000 (0:00:00.306) 0:00:00.564 ******
2026-02-17 04:12:03.267744 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-17 04:12:03.267756 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-17 04:12:03.267768 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-17 04:12:03.267779 | orchestrator |
2026-02-17 04:12:03.267790 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-17 04:12:03.267801 | orchestrator |
2026-02-17 04:12:03.267812 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-17 04:12:03.267823 | orchestrator | Tuesday 17 February 2026 04:12:01 +0000 (0:00:00.446) 0:00:01.010 ******
2026-02-17 04:12:03.267860 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:12:03.267873 | orchestrator |
2026-02-17 04:12:03.267884 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-17 04:12:03.267952 | orchestrator | Tuesday 17 February 2026 04:12:01 +0000 (0:00:00.548) 0:00:01.559 ******
2026-02-17 04:12:03.268029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-17 04:12:03.268073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-17 04:12:03.268108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-17 04:12:03.268123 | orchestrator |
2026-02-17 04:12:03.268138 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-02-17 04:12:03.268157 | orchestrator | Tuesday 17 February 2026 04:12:02 +0000 (0:00:01.131) 0:00:02.690 ******
2026-02-17 04:12:03.268176 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:03.268195 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:03.268213 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:03.268230 | orchestrator |
2026-02-17 04:12:03.268248 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-17 04:12:03.268267 | orchestrator | Tuesday 17 February 2026 04:12:03 +0000 (0:00:00.448) 0:00:03.138 ******
2026-02-17 04:12:03.268295 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-17 04:12:09.204552 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-17 04:12:09.204665 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-17 04:12:09.204680 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-17 04:12:09.204692 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-17 04:12:09.204729 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-17 04:12:09.204740 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-17 04:12:09.204752 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-17 04:12:09.204763 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-17 04:12:09.204773 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-17 04:12:09.204784 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-17 04:12:09.204795 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-17 04:12:09.204806 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-17 04:12:09.204817 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-17 04:12:09.204827 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-17 04:12:09.204838 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-17 04:12:09.204849 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-17 04:12:09.204860 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-17 04:12:09.204871 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-17 04:12:09.204881 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-17 04:12:09.204892 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-17 04:12:09.204950 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-17 04:12:09.204964 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-17 04:12:09.204981 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-17 04:12:09.204996 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-17 04:12:09.205009 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-17 04:12:09.205037 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-17 04:12:09.205049 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-17 04:12:09.205060 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-17 04:12:09.205072 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-17 04:12:09.205085 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-17 04:12:09.205097 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-17 04:12:09.205110 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-17 04:12:09.205124 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-17 04:12:09.205146 | orchestrator |
2026-02-17 04:12:09.205161 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:09.205174 | orchestrator | Tuesday 17 February 2026 04:12:03 +0000 (0:00:00.727) 0:00:03.865 ******
2026-02-17 04:12:09.205187 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:09.205200 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:09.205213 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:09.205226 | orchestrator |
2026-02-17 04:12:09.205239 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:09.205251 | orchestrator | Tuesday 17 February 2026 04:12:04 +0000 (0:00:00.312) 0:00:04.178 ******
2026-02-17 04:12:09.205264 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.205278 | orchestrator |
2026-02-17 04:12:09.205310 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:09.205324 | orchestrator | Tuesday 17 February 2026 04:12:04 +0000 (0:00:00.312) 0:00:04.491 ******
2026-02-17 04:12:09.205336 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.205347 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:09.205358 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:09.205369 | orchestrator |
2026-02-17 04:12:09.205380 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:09.205391 | orchestrator | Tuesday 17 February 2026 04:12:04 +0000 (0:00:00.293) 0:00:04.785 ******
2026-02-17 04:12:09.205402 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:09.205413 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:09.205424 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:09.205434 | orchestrator |
2026-02-17 04:12:09.205446 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:09.205456 | orchestrator | Tuesday 17 February 2026 04:12:05 +0000 (0:00:00.302) 0:00:05.087 ******
2026-02-17 04:12:09.205467 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.205478 | orchestrator |
2026-02-17 04:12:09.205490 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:09.205501 | orchestrator | Tuesday 17 February 2026 04:12:05 +0000 (0:00:00.137) 0:00:05.225 ******
2026-02-17 04:12:09.205512 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.205524 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:09.205535 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:09.205545 | orchestrator |
2026-02-17 04:12:09.205556 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:09.205567 | orchestrator | Tuesday 17 February 2026 04:12:05 +0000 (0:00:00.304) 0:00:05.529 ******
2026-02-17 04:12:09.205578 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:09.205589 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:09.205600 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:09.205611 | orchestrator |
2026-02-17 04:12:09.205622 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:09.205633 | orchestrator | Tuesday 17 February 2026 04:12:06 +0000 (0:00:00.495) 0:00:06.025 ******
2026-02-17 04:12:09.205644 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.205655 | orchestrator |
2026-02-17 04:12:09.205666 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:09.205676 | orchestrator | Tuesday 17 February 2026 04:12:06 +0000 (0:00:00.150) 0:00:06.176 ******
2026-02-17 04:12:09.205687 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.205699 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:09.205710 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:09.205721 | orchestrator |
2026-02-17 04:12:09.205732 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:09.205743 | orchestrator | Tuesday 17 February 2026 04:12:06 +0000 (0:00:00.314) 0:00:06.491 ******
2026-02-17 04:12:09.205754 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:09.205765 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:09.205783 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:09.205795 | orchestrator |
2026-02-17 04:12:09.205806 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:09.205816 | orchestrator | Tuesday 17 February 2026 04:12:06 +0000 (0:00:00.333) 0:00:06.825 ******
2026-02-17 04:12:09.205827 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.205838 | orchestrator |
2026-02-17 04:12:09.205849 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:09.205860 | orchestrator | Tuesday 17 February 2026 04:12:06 +0000 (0:00:00.133) 0:00:06.958 ******
2026-02-17 04:12:09.205871 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.205887 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:09.205899 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:09.205937 | orchestrator |
2026-02-17 04:12:09.205949 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:09.205960 | orchestrator | Tuesday 17 February 2026 04:12:07 +0000 (0:00:00.521) 0:00:07.479 ******
2026-02-17 04:12:09.205970 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:09.205981 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:09.205997 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:09.206074 | orchestrator |
2026-02-17 04:12:09.206088 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:09.206099 | orchestrator | Tuesday 17 February 2026 04:12:07 +0000 (0:00:00.324) 0:00:07.804 ******
2026-02-17 04:12:09.206110 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.206121 | orchestrator |
2026-02-17 04:12:09.206132 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:09.206143 | orchestrator | Tuesday 17 February 2026 04:12:07 +0000 (0:00:00.138) 0:00:07.943 ******
2026-02-17 04:12:09.206154 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.206165 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:09.206176 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:09.206187 | orchestrator |
2026-02-17 04:12:09.206197 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:09.206208 | orchestrator | Tuesday 17 February 2026 04:12:08 +0000 (0:00:00.291) 0:00:08.235 ******
2026-02-17 04:12:09.206219 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:09.206230 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:09.206241 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:09.206252 | orchestrator |
2026-02-17 04:12:09.206263 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:09.206274 | orchestrator | Tuesday 17 February 2026 04:12:08 +0000 (0:00:00.312) 0:00:08.548 ******
2026-02-17 04:12:09.206285 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.206296 | orchestrator |
2026-02-17 04:12:09.206307 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:09.206318 | orchestrator | Tuesday 17 February 2026 04:12:08 +0000 (0:00:00.300) 0:00:08.848 ******
2026-02-17 04:12:09.206329 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:09.206340 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:09.206351 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:09.206362 | orchestrator |
2026-02-17 04:12:09.206373 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:09.206392 | orchestrator | Tuesday 17 February 2026 04:12:09 +0000 (0:00:00.331) 0:00:09.180 ******
2026-02-17 04:12:22.880780 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:22.880893 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:22.880906 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:22.880916 | orchestrator |
2026-02-17 04:12:22.880968 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:22.880980 | orchestrator | Tuesday 17 February 2026 04:12:09 +0000 (0:00:00.324) 0:00:09.505 ******
2026-02-17 04:12:22.880990 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:22.881002 | orchestrator |
2026-02-17 04:12:22.881012 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:22.881047 | orchestrator | Tuesday 17 February 2026 04:12:09 +0000 (0:00:00.119) 0:00:09.624 ******
2026-02-17 04:12:22.881058 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:22.881068 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:22.881078 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:22.881088 | orchestrator |
2026-02-17 04:12:22.881098 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:22.881108 | orchestrator | Tuesday 17 February 2026 04:12:09 +0000 (0:00:00.278) 0:00:09.903 ******
2026-02-17 04:12:22.881118 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:22.881127 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:22.881137 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:22.881147 | orchestrator |
2026-02-17 04:12:22.881157 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:22.881167 | orchestrator | Tuesday 17 February 2026 04:12:10 +0000 (0:00:00.496) 0:00:10.399 ******
2026-02-17 04:12:22.881176 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:22.881186 | orchestrator |
2026-02-17 04:12:22.881196 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:22.881206 | orchestrator | Tuesday 17 February 2026 04:12:10 +0000 (0:00:00.149) 0:00:10.548 ******
2026-02-17 04:12:22.881216 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:22.881225 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:22.881235 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:22.881245 | orchestrator |
2026-02-17 04:12:22.881255 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:22.881265 | orchestrator | Tuesday 17 February 2026 04:12:10 +0000 (0:00:00.309) 0:00:10.858 ******
2026-02-17 04:12:22.881274 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:22.881284 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:22.881294 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:22.881303 | orchestrator |
2026-02-17 04:12:22.881325 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:22.881337 | orchestrator | Tuesday 17 February 2026 04:12:11 +0000 (0:00:00.332) 0:00:11.190 ******
2026-02-17 04:12:22.881349 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:22.881360 | orchestrator |
2026-02-17 04:12:22.881371 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:22.881381 | orchestrator | Tuesday 17 February 2026 04:12:11 +0000 (0:00:00.125) 0:00:11.315 ******
2026-02-17 04:12:22.881392 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:22.881403 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:12:22.881414 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:12:22.881426 | orchestrator |
2026-02-17 04:12:22.881437 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-17 04:12:22.881447 | orchestrator | Tuesday 17 February 2026 04:12:11 +0000 (0:00:00.496) 0:00:11.812 ******
2026-02-17 04:12:22.881458 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:12:22.881469 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:12:22.881480 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:12:22.881491 | orchestrator |
2026-02-17 04:12:22.881502 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-17 04:12:22.881527 | orchestrator | Tuesday 17 February 2026 04:12:12 +0000 (0:00:00.327) 0:00:12.139 ******
2026-02-17 04:12:22.881538 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:12:22.881549 | orchestrator |
2026-02-17 04:12:22.881559 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-17 04:12:22.881571 | orchestrator | Tuesday 17 February 2026 04:12:12 +0000 (0:00:00.129) 0:00:12.268 ******
2026-02-17 04:12:22.881582 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:12:22.881593 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:12:22.881604 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:12:22.881615 | orchestrator | 2026-02-17 04:12:22.881669 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-17 04:12:22.881681 | orchestrator | Tuesday 17 February 2026 04:12:12 +0000 (0:00:00.320) 0:00:12.589 ****** 2026-02-17 04:12:22.881700 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:12:22.881710 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:12:22.881720 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:12:22.881730 | orchestrator | 2026-02-17 04:12:22.881740 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-17 04:12:22.881749 | orchestrator | Tuesday 17 February 2026 04:12:14 +0000 (0:00:01.791) 0:00:14.380 ****** 2026-02-17 04:12:22.881759 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-17 04:12:22.881770 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-17 04:12:22.881779 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-17 04:12:22.881789 | orchestrator | 2026-02-17 04:12:22.881799 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-17 04:12:22.881808 | orchestrator | Tuesday 17 February 2026 04:12:16 +0000 (0:00:01.807) 0:00:16.187 ****** 2026-02-17 04:12:22.881818 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-17 04:12:22.881829 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-17 04:12:22.881839 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-17 04:12:22.881849 | orchestrator | 2026-02-17 04:12:22.881858 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-17 04:12:22.881885 | orchestrator | Tuesday 17 February 2026 04:12:18 +0000 (0:00:01.811) 0:00:17.999 ****** 2026-02-17 04:12:22.881896 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-17 04:12:22.881906 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-17 04:12:22.881915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-17 04:12:22.881944 | orchestrator | 2026-02-17 04:12:22.881954 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-17 04:12:22.881964 | orchestrator | Tuesday 17 February 2026 04:12:19 +0000 (0:00:01.480) 0:00:19.479 ****** 2026-02-17 04:12:22.881974 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:12:22.881984 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:12:22.881994 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:12:22.882004 | orchestrator | 2026-02-17 04:12:22.882069 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-17 04:12:22.882080 | orchestrator | Tuesday 17 February 2026 04:12:20 +0000 (0:00:00.511) 0:00:19.991 ****** 2026-02-17 04:12:22.882090 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:12:22.882100 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:12:22.882110 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:12:22.882120 | orchestrator | 2026-02-17 04:12:22.882129 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-17 04:12:22.882139 
| orchestrator | Tuesday 17 February 2026 04:12:20 +0000 (0:00:00.308) 0:00:20.299 ****** 2026-02-17 04:12:22.882149 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:12:22.882159 | orchestrator | 2026-02-17 04:12:22.882169 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-17 04:12:22.882179 | orchestrator | Tuesday 17 February 2026 04:12:20 +0000 (0:00:00.593) 0:00:20.893 ****** 2026-02-17 04:12:22.882201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 04:12:22.882235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 04:12:23.623200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 04:12:23.623328 | orchestrator | 2026-02-17 04:12:23.623344 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-17 04:12:23.623357 | orchestrator | Tuesday 17 February 2026 04:12:22 +0000 (0:00:01.953) 0:00:22.847 ****** 2026-02-17 04:12:23.623388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 04:12:23.623409 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:12:23.623428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 04:12:23.623440 | orchestrator | skipping: [testbed-node-1] 
2026-02-17 04:12:23.623458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 04:12:26.160586 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:12:26.160692 | orchestrator | 2026-02-17 04:12:26.160705 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-17 04:12:26.160712 | orchestrator | Tuesday 17 February 2026 04:12:23 +0000 (0:00:00.751) 0:00:23.599 ****** 2026-02-17 04:12:26.160724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 04:12:26.160734 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:12:26.160760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 04:12:26.160792 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:12:26.160879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 04:12:26.160900 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:12:26.160906 | orchestrator | 2026-02-17 04:12:26.160914 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-17 04:12:26.160987 | orchestrator | Tuesday 17 February 2026 04:12:24 +0000 (0:00:00.876) 0:00:24.475 ****** 2026-02-17 04:12:26.161011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 04:13:10.647469 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 04:13:10.647708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 04:13:10.647731 | orchestrator | 2026-02-17 04:13:10.647745 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-17 04:13:10.647758 | orchestrator | Tuesday 17 February 2026 04:12:26 +0000 (0:00:01.661) 0:00:26.137 ****** 2026-02-17 04:13:10.647768 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:13:10.647781 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:13:10.647791 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:13:10.647802 | orchestrator | 2026-02-17 04:13:10.647814 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-17 04:13:10.647825 | orchestrator | Tuesday 17 February 2026 04:12:26 +0000 (0:00:00.336) 0:00:26.474 ****** 2026-02-17 04:13:10.647836 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:13:10.647847 | orchestrator | 2026-02-17 04:13:10.647858 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-17 04:13:10.647869 | orchestrator | Tuesday 17 February 2026 04:12:27 +0000 (0:00:00.550) 0:00:27.024 ****** 2026-02-17 04:13:10.647880 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:13:10.647891 | orchestrator | 2026-02-17 04:13:10.647901 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-17 04:13:10.647912 | orchestrator | Tuesday 17 February 2026 04:12:29 +0000 (0:00:02.111) 0:00:29.136 ****** 2026-02-17 04:13:10.647932 | orchestrator | changed: 
[testbed-node-0] 2026-02-17 04:13:10.647944 | orchestrator | 2026-02-17 04:13:10.647955 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-17 04:13:10.647966 | orchestrator | Tuesday 17 February 2026 04:12:31 +0000 (0:00:02.567) 0:00:31.703 ****** 2026-02-17 04:13:10.647976 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:13:10.648053 | orchestrator | 2026-02-17 04:13:10.648069 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-17 04:13:10.648083 | orchestrator | Tuesday 17 February 2026 04:12:47 +0000 (0:00:15.842) 0:00:47.546 ****** 2026-02-17 04:13:10.648095 | orchestrator | 2026-02-17 04:13:10.648107 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-17 04:13:10.648119 | orchestrator | Tuesday 17 February 2026 04:12:47 +0000 (0:00:00.070) 0:00:47.616 ****** 2026-02-17 04:13:10.648131 | orchestrator | 2026-02-17 04:13:10.648143 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-17 04:13:10.648155 | orchestrator | Tuesday 17 February 2026 04:12:47 +0000 (0:00:00.065) 0:00:47.682 ****** 2026-02-17 04:13:10.648167 | orchestrator | 2026-02-17 04:13:10.648179 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-17 04:13:10.648192 | orchestrator | Tuesday 17 February 2026 04:12:47 +0000 (0:00:00.072) 0:00:47.755 ****** 2026-02-17 04:13:10.648205 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:13:10.648217 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:13:10.648229 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:13:10.648242 | orchestrator | 2026-02-17 04:13:10.648254 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:13:10.648268 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 
skipped=25  rescued=0 ignored=0 2026-02-17 04:13:10.648282 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-17 04:13:10.648294 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-17 04:13:10.648307 | orchestrator | 2026-02-17 04:13:10.648319 | orchestrator | 2026-02-17 04:13:10.648331 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:13:10.648341 | orchestrator | Tuesday 17 February 2026 04:13:10 +0000 (0:00:22.847) 0:01:10.602 ****** 2026-02-17 04:13:10.648352 | orchestrator | =============================================================================== 2026-02-17 04:13:10.648363 | orchestrator | horizon : Restart horizon container ------------------------------------ 22.85s 2026-02-17 04:13:10.648379 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.84s 2026-02-17 04:13:10.648391 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.57s 2026-02-17 04:13:10.648401 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.11s 2026-02-17 04:13:10.648412 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.95s 2026-02-17 04:13:10.648423 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.81s 2026-02-17 04:13:10.648434 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.81s 2026-02-17 04:13:10.648444 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.79s 2026-02-17 04:13:10.648455 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.66s 2026-02-17 04:13:10.648466 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.48s 
2026-02-17 04:13:10.648477 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.13s 2026-02-17 04:13:10.648487 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2026-02-17 04:13:10.648498 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.75s 2026-02-17 04:13:10.648524 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-02-17 04:13:11.010440 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s 2026-02-17 04:13:11.010568 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-02-17 04:13:11.010590 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-02-17 04:13:11.010611 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2026-02-17 04:13:11.010623 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.51s 2026-02-17 04:13:11.010634 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2026-02-17 04:13:13.312084 | orchestrator | 2026-02-17 04:13:13 | INFO  | Task 1a9c8fb1-1f24-464b-b4ad-9d12c4af2f86 (skyline) was prepared for execution. 2026-02-17 04:13:13.312180 | orchestrator | 2026-02-17 04:13:13 | INFO  | It takes a moment until task 1a9c8fb1-1f24-464b-b4ad-9d12c4af2f86 (skyline) has been started and output is visible here. 
2026-02-17 04:13:43.393336 | orchestrator | 2026-02-17 04:13:43.393432 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:13:43.393444 | orchestrator | 2026-02-17 04:13:43.393452 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:13:43.393460 | orchestrator | Tuesday 17 February 2026 04:13:17 +0000 (0:00:00.274) 0:00:00.274 ****** 2026-02-17 04:13:43.393467 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:13:43.393476 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:13:43.393484 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:13:43.393491 | orchestrator | 2026-02-17 04:13:43.393499 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:13:43.393506 | orchestrator | Tuesday 17 February 2026 04:13:17 +0000 (0:00:00.306) 0:00:00.581 ****** 2026-02-17 04:13:43.393514 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-02-17 04:13:43.393522 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-02-17 04:13:43.393529 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-02-17 04:13:43.393537 | orchestrator | 2026-02-17 04:13:43.393544 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-02-17 04:13:43.393552 | orchestrator | 2026-02-17 04:13:43.393559 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-17 04:13:43.393567 | orchestrator | Tuesday 17 February 2026 04:13:18 +0000 (0:00:00.443) 0:00:01.024 ****** 2026-02-17 04:13:43.393575 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:13:43.393583 | orchestrator | 2026-02-17 04:13:43.393590 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-02-17 04:13:43.393598 | orchestrator | Tuesday 17 February 2026 04:13:18 +0000 (0:00:00.564) 0:00:01.588 ****** 2026-02-17 04:13:43.393605 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-02-17 04:13:43.393613 | orchestrator | 2026-02-17 04:13:43.393620 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-02-17 04:13:43.393628 | orchestrator | Tuesday 17 February 2026 04:13:22 +0000 (0:00:03.211) 0:00:04.800 ****** 2026-02-17 04:13:43.393636 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-02-17 04:13:43.393643 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-02-17 04:13:43.393651 | orchestrator | 2026-02-17 04:13:43.393658 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-02-17 04:13:43.393666 | orchestrator | Tuesday 17 February 2026 04:13:28 +0000 (0:00:06.200) 0:00:11.001 ****** 2026-02-17 04:13:43.393674 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:13:43.393683 | orchestrator | 2026-02-17 04:13:43.393690 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-02-17 04:13:43.393719 | orchestrator | Tuesday 17 February 2026 04:13:31 +0000 (0:00:03.150) 0:00:14.152 ****** 2026-02-17 04:13:43.393727 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:13:43.393735 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-02-17 04:13:43.393742 | orchestrator | 2026-02-17 04:13:43.393749 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-02-17 04:13:43.393756 | orchestrator | Tuesday 17 February 2026 04:13:35 +0000 (0:00:03.892) 0:00:18.044 ****** 2026-02-17 04:13:43.393778 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-17 04:13:43.393785 | orchestrator | 2026-02-17 04:13:43.393792 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-17 04:13:43.393800 | orchestrator | Tuesday 17 February 2026 04:13:38 +0000 (0:00:03.111) 0:00:21.156 ****** 2026-02-17 04:13:43.393807 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-17 04:13:43.393814 | orchestrator | 2026-02-17 04:13:43.393821 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-17 04:13:43.393829 | orchestrator | Tuesday 17 February 2026 04:13:42 +0000 (0:00:03.661) 0:00:24.817 ****** 2026-02-17 04:13:43.393839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:43.393864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:43.393873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:43.393891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:43.393902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:43.393917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:47.263310 | orchestrator | 2026-02-17 04:13:47.263416 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-17 04:13:47.263428 | orchestrator | Tuesday 17 February 2026 04:13:43 +0000 (0:00:01.266) 0:00:26.084 ****** 2026-02-17 04:13:47.263436 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:13:47.263444 | orchestrator | 2026-02-17 04:13:47.263452 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-17 04:13:47.263459 | orchestrator | Tuesday 17 February 2026 04:13:44 +0000 (0:00:00.723) 0:00:26.808 ****** 2026-02-17 04:13:47.263468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:47.263545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:47.263556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:47.263578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:47.263587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:47.263600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-17 04:13:47.263607 | orchestrator | 2026-02-17 04:13:47.263615 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-17 04:13:47.263625 | orchestrator | Tuesday 17 February 2026 04:13:46 +0000 (0:00:02.537) 0:00:29.345 ****** 2026-02-17 04:13:47.263633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-17 04:13:47.263640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-17 04:13:47.263647 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:13:47.263660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-17 04:13:48.510642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-17 04:13:48.510752 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:13:48.510788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-17 04:13:48.510803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-17 04:13:48.510815 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:13:48.510827 | orchestrator | 2026-02-17 04:13:48.510839 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-17 04:13:48.510851 | orchestrator | Tuesday 17 February 2026 04:13:47 +0000 (0:00:00.618) 0:00:29.964 ****** 2026-02-17 04:13:48.510863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:48.510913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:48.510927 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:13:48.510944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:48.510956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:48.510967 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:13:48.510978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:48.511060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:56.757691 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:13:56.757816 | orchestrator |
2026-02-17 04:13:56.757840 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ******************
2026-02-17 04:13:56.757854 | orchestrator | Tuesday 17 February 2026 04:13:48 +0000 (0:00:01.239) 0:00:31.203 ******
2026-02-17 04:13:56.757884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:56.757901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:56.757913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:56.758076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:56.758127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:56.758141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:56.758153 | orchestrator |
2026-02-17 04:13:56.758164 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-02-17 04:13:56.758176 | orchestrator | Tuesday 17 February 2026 04:13:50 +0000 (0:00:02.462) 0:00:33.666 ******
2026-02-17 04:13:56.758187 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-17 04:13:56.758198 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-17 04:13:56.758208 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-17 04:13:56.758219 | orchestrator |
2026-02-17 04:13:56.758242 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-02-17 04:13:56.758254 | orchestrator | Tuesday 17 February 2026 04:13:52 +0000 (0:00:01.510) 0:00:35.177 ******
2026-02-17 04:13:56.758266 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-17 04:13:56.758278 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-17 04:13:56.758291 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-17 04:13:56.758303 | orchestrator |
2026-02-17 04:13:56.758316 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-02-17 04:13:56.758328 | orchestrator | Tuesday 17 February 2026 04:13:54 +0000 (0:00:02.043) 0:00:37.220 ******
2026-02-17 04:13:56.758341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:56.758362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785625 | orchestrator |
2026-02-17 04:13:58.785636 | orchestrator | TASK [skyline : Copying over custom logos] *************************************
2026-02-17 04:13:58.785647 | orchestrator | Tuesday 17 February 2026 04:13:56 +0000 (0:00:02.236) 0:00:39.457 ******
2026-02-17 04:13:58.785657 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:13:58.785667 | orchestrator | skipping: 
[testbed-node-1]
2026-02-17 04:13:58.785677 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:13:58.785686 | orchestrator |
2026-02-17 04:13:58.785711 | orchestrator | TASK [skyline : Check skyline container] ***************************************
2026-02-17 04:13:58.785728 | orchestrator | Tuesday 17 February 2026 04:13:57 +0000 (0:00:00.298) 0:00:39.755 ******
2026-02-17 04:13:58.785739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:13:58.785804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:14:36.696895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-17 04:14:36.697039 | orchestrator |
2026-02-17 04:14:36.697057 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-02-17 04:14:36.697070 | orchestrator | Tuesday 17 February 2026 04:13:58 +0000 (0:00:01.726) 0:00:41.481 ******
2026-02-17 04:14:36.697081 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:14:36.697093 | orchestrator |
2026-02-17 04:14:36.697104 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-02-17 04:14:36.697115 | orchestrator | Tuesday 17 February 2026 04:14:00 +0000 (0:00:02.067) 0:00:43.549 ******
2026-02-17 04:14:36.697126 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:14:36.697136 | orchestrator |
2026-02-17 04:14:36.697147 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-02-17 04:14:36.697158 | orchestrator | Tuesday 17 February 2026 04:14:02 +0000 (0:00:02.151) 0:00:45.701 ******
2026-02-17 04:14:36.697169 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:14:36.697180 | orchestrator |
2026-02-17 04:14:36.697191 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-17 04:14:36.697202 | orchestrator | Tuesday 17 February 2026 04:14:10 +0000 (0:00:07.574) 0:00:53.275 ******
2026-02-17 04:14:36.697213 | orchestrator |
2026-02-17 04:14:36.697224 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-17 04:14:36.697235 | orchestrator | Tuesday 17 February 2026 04:14:10 +0000 (0:00:00.067) 0:00:53.343 ******
2026-02-17 04:14:36.697245 | orchestrator |
2026-02-17 04:14:36.697256 | orchestrator | TASK [skyline : Flush handlers] 
************************************************
2026-02-17 04:14:36.697267 | orchestrator | Tuesday 17 February 2026 04:14:10 +0000 (0:00:00.071) 0:00:53.426 ******
2026-02-17 04:14:36.697278 | orchestrator |
2026-02-17 04:14:36.697288 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-02-17 04:14:36.697299 | orchestrator | Tuesday 17 February 2026 04:14:10 +0000 (0:00:00.071) 0:00:53.498 ******
2026-02-17 04:14:36.697310 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:14:36.697321 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:14:36.697332 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:14:36.697343 | orchestrator |
2026-02-17 04:14:36.697354 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-02-17 04:14:36.697365 | orchestrator | Tuesday 17 February 2026 04:14:22 +0000 (0:00:11.410) 0:01:04.909 ******
2026-02-17 04:14:36.697375 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:14:36.697386 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:14:36.697397 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:14:36.697408 | orchestrator |
2026-02-17 04:14:36.697419 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 04:14:36.697431 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-17 04:14:36.697446 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-17 04:14:36.697458 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-17 04:14:36.697479 | orchestrator |
2026-02-17 04:14:36.697491 | orchestrator |
2026-02-17 04:14:36.697503 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 04:14:36.697516 | orchestrator | Tuesday 17 
February 2026 04:14:36 +0000 (0:00:14.181) 0:01:19.091 ******
2026-02-17 04:14:36.697529 | orchestrator | ===============================================================================
2026-02-17 04:14:36.697555 | orchestrator | skyline : Restart skyline-console container ---------------------------- 14.18s
2026-02-17 04:14:36.697568 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.41s
2026-02-17 04:14:36.697580 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.57s
2026-02-17 04:14:36.697592 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.20s
2026-02-17 04:14:36.697604 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.89s
2026-02-17 04:14:36.697616 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.66s
2026-02-17 04:14:36.697628 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.21s
2026-02-17 04:14:36.697640 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.15s
2026-02-17 04:14:36.697702 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.11s
2026-02-17 04:14:36.697717 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.54s
2026-02-17 04:14:36.697729 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.46s
2026-02-17 04:14:36.697740 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.24s
2026-02-17 04:14:36.697752 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.15s
2026-02-17 04:14:36.697764 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.07s
2026-02-17 04:14:36.697777 | orchestrator | skyline : Copying over nginx.conf 
files for services -------------------- 2.04s
2026-02-17 04:14:36.697789 | orchestrator | skyline : Check skyline container --------------------------------------- 1.73s
2026-02-17 04:14:36.697801 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.51s
2026-02-17 04:14:36.697813 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.27s
2026-02-17 04:14:36.697825 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.24s
2026-02-17 04:14:36.697837 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.72s
2026-02-17 04:14:38.991957 | orchestrator | 2026-02-17 04:14:38 | INFO  | Task 956fd84c-73a7-4a60-9cb2-6abd671cc7c8 (glance) was prepared for execution.
2026-02-17 04:14:38.992054 | orchestrator | 2026-02-17 04:14:38 | INFO  | It takes a moment until task 956fd84c-73a7-4a60-9cb2-6abd671cc7c8 (glance) has been started and output is visible here.
2026-02-17 04:15:12.096864 | orchestrator |
2026-02-17 04:15:12.096975 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 04:15:12.096991 | orchestrator |
2026-02-17 04:15:12.097003 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 04:15:12.097014 | orchestrator | Tuesday 17 February 2026 04:14:43 +0000 (0:00:00.257) 0:00:00.257 ******
2026-02-17 04:15:12.097026 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:15:12.097038 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:15:12.097049 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:15:12.097060 | orchestrator |
2026-02-17 04:15:12.097071 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 04:15:12.097082 | orchestrator | Tuesday 17 February 2026 04:14:43 +0000 (0:00:00.305) 0:00:00.563 ******
2026-02-17 04:15:12.097093 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-17 04:15:12.097104 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-17 04:15:12.097115 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-17 04:15:12.097151 | orchestrator |
2026-02-17 04:15:12.097163 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-17 04:15:12.097174 | orchestrator |
2026-02-17 04:15:12.097185 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-17 04:15:12.097196 | orchestrator | Tuesday 17 February 2026 04:14:43 +0000 (0:00:00.434) 0:00:00.997 ******
2026-02-17 04:15:12.097207 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:15:12.097218 | orchestrator |
2026-02-17 04:15:12.097229 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-17 04:15:12.097240 | orchestrator | Tuesday 17 February 2026 04:14:44 +0000 (0:00:00.588) 0:00:01.586 ******
2026-02-17 04:15:12.097250 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-17 04:15:12.097262 | orchestrator |
2026-02-17 04:15:12.097272 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-17 04:15:12.097283 | orchestrator | Tuesday 17 February 2026 04:14:47 +0000 (0:00:03.432) 0:00:05.019 ******
2026-02-17 04:15:12.097294 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-17 04:15:12.097306 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-17 04:15:12.097317 | orchestrator |
2026-02-17 04:15:12.097328 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-17 04:15:12.097339 | orchestrator | Tuesday 17 February 2026 04:14:54 +0000 (0:00:06.096) 0:00:11.115 ******
2026-02-17 04:15:12.097350 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-17 04:15:12.097361 | orchestrator |
2026-02-17 04:15:12.097372 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-17 04:15:12.097383 | orchestrator | Tuesday 17 February 2026 04:14:57 +0000 (0:00:03.159) 0:00:14.274 ******
2026-02-17 04:15:12.097394 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-17 04:15:12.097405 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-17 04:15:12.097416 | orchestrator |
2026-02-17 04:15:12.097469 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-17 04:15:12.097482 | orchestrator | Tuesday 17 February 2026 04:15:01 +0000 (0:00:03.887) 0:00:18.161 ******
2026-02-17 04:15:12.097493 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-17
04:15:12.097504 | orchestrator | 2026-02-17 04:15:12.097515 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-17 04:15:12.097525 | orchestrator | Tuesday 17 February 2026 04:15:04 +0000 (0:00:03.104) 0:00:21.265 ****** 2026-02-17 04:15:12.097536 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-17 04:15:12.097547 | orchestrator | 2026-02-17 04:15:12.097558 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-17 04:15:12.097568 | orchestrator | Tuesday 17 February 2026 04:15:08 +0000 (0:00:03.850) 0:00:25.116 ****** 2026-02-17 04:15:12.097606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:15:12.097631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:15:12.097651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:15:12.097670 | orchestrator | 2026-02-17 04:15:12.097681 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2026-02-17 04:15:12.097692 | orchestrator | Tuesday 17 February 2026 04:15:11 +0000 (0:00:03.343) 0:00:28.460 ******
2026-02-17 04:15:12.097703 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:15:12.097714 | orchestrator |
2026-02-17 04:15:12.097732 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-17 04:15:27.143219 | orchestrator | Tuesday 17 February 2026 04:15:12 +0000 (0:00:00.720) 0:00:29.181 ******
2026-02-17 04:15:27.143321 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:15:27.143337 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:15:27.143392 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:15:27.143403 | orchestrator |
2026-02-17 04:15:27.143414 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-17 04:15:27.143425 | orchestrator | Tuesday 17 February 2026 04:15:15 +0000 (0:00:03.528) 0:00:32.709 ******
2026-02-17 04:15:27.143440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-17 04:15:27.143458 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-17 04:15:27.143475 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-17 04:15:27.143491 | orchestrator |
2026-02-17 04:15:27.143523 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-17 04:15:27.143541 | orchestrator | Tuesday 17 February 2026 04:15:17 +0000 (0:00:01.611) 0:00:34.320 ******
2026-02-17 04:15:27.143555 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-17 04:15:27.143565 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-17 04:15:27.143575 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-17 04:15:27.143585 | orchestrator |
2026-02-17 04:15:27.143595 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-17 04:15:27.143605 | orchestrator | Tuesday 17 February 2026 04:15:18 +0000 (0:00:01.349) 0:00:35.670 ******
2026-02-17 04:15:27.143615 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:15:27.143626 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:15:27.143636 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:15:27.143645 | orchestrator |
2026-02-17 04:15:27.143655 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-17 04:15:27.143666 | orchestrator | Tuesday 17 February 2026 04:15:19 +0000 (0:00:00.661) 0:00:36.331 ******
2026-02-17 04:15:27.143676 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:15:27.143685 | orchestrator |
2026-02-17 04:15:27.143696 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-17 04:15:27.143705 | orchestrator | Tuesday 17 February 2026 04:15:19 +0000 (0:00:00.141) 0:00:36.473 ******
2026-02-17 04:15:27.143715 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:15:27.143725 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:15:27.143735 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:15:27.143745 | orchestrator |
2026-02-17 04:15:27.143755 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-17 04:15:27.143767 | orchestrator | Tuesday 17 February 2026 04:15:19 +0000 (0:00:00.301) 0:00:36.774 ******
2026-02-17 04:15:27.143795 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:15:27.143807 | orchestrator | 2026-02-17 04:15:27.143818 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-17 04:15:27.143829 | orchestrator | Tuesday 17 February 2026 04:15:20 +0000 (0:00:00.711) 0:00:37.486 ****** 2026-02-17 04:15:27.143870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:15:27.143906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:15:27.143926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:15:27.143946 | orchestrator | 2026-02-17 04:15:27.143958 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-17 04:15:27.143969 | orchestrator | Tuesday 17 February 2026 04:15:24 +0000 (0:00:03.757) 0:00:41.244 ****** 2026-02-17 04:15:27.143991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 04:15:30.588292 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:15:30.588472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 04:15:30.588520 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:15:30.588534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 04:15:30.588547 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:15:30.588558 | orchestrator | 2026-02-17 04:15:30.588571 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-17 04:15:30.588583 | orchestrator | Tuesday 17 February 2026 04:15:27 +0000 (0:00:02.982) 0:00:44.227 ****** 2026-02-17 04:15:30.588621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 04:15:30.588643 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:15:30.588656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 04:15:30.588668 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:15:30.588689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 04:16:03.899521 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:16:03.899623 | orchestrator | 2026-02-17 04:16:03.899640 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-17 04:16:03.899652 | orchestrator | Tuesday 17 February 2026 04:15:30 +0000 (0:00:03.442) 0:00:47.669 ****** 2026-02-17 04:16:03.899664 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:16:03.899675 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:16:03.899686 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:16:03.899697 | orchestrator | 2026-02-17 04:16:03.899722 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-17 04:16:03.899734 | orchestrator | Tuesday 17 February 2026 04:15:33 +0000 (0:00:03.172) 0:00:50.842 ****** 2026-02-17 04:16:03.899749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:16:03.899767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:16:03.899823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:16:03.899838 | orchestrator | 2026-02-17 04:16:03.899850 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-17 04:16:03.899861 | orchestrator | Tuesday 17 February 2026 04:15:37 +0000 (0:00:03.805) 0:00:54.648 ****** 2026-02-17 04:16:03.899872 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:16:03.899883 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:16:03.899894 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:16:03.899905 | orchestrator | 2026-02-17 04:16:03.899916 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-17 04:16:03.899927 | orchestrator | Tuesday 17 February 2026 04:15:43 +0000 (0:00:05.458) 0:01:00.106 ****** 2026-02-17 04:16:03.899938 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:16:03.899949 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:16:03.899960 | 
orchestrator | skipping: [testbed-node-2] 2026-02-17 04:16:03.899971 | orchestrator | 2026-02-17 04:16:03.899982 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-17 04:16:03.899992 | orchestrator | Tuesday 17 February 2026 04:15:46 +0000 (0:00:03.592) 0:01:03.699 ****** 2026-02-17 04:16:03.900003 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:16:03.900014 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:16:03.900025 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:16:03.900036 | orchestrator | 2026-02-17 04:16:03.900046 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-17 04:16:03.900057 | orchestrator | Tuesday 17 February 2026 04:15:49 +0000 (0:00:03.303) 0:01:07.002 ****** 2026-02-17 04:16:03.900068 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:16:03.900079 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:16:03.900093 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:16:03.900105 | orchestrator | 2026-02-17 04:16:03.900118 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-17 04:16:03.900151 | orchestrator | Tuesday 17 February 2026 04:15:53 +0000 (0:00:03.113) 0:01:10.116 ****** 2026-02-17 04:16:03.900164 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:16:03.900178 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:16:03.900198 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:16:03.900209 | orchestrator | 2026-02-17 04:16:03.900220 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-17 04:16:03.900231 | orchestrator | Tuesday 17 February 2026 04:15:56 +0000 (0:00:03.342) 0:01:13.458 ****** 2026-02-17 04:16:03.900242 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:16:03.900252 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:16:03.900263 | 
orchestrator | skipping: [testbed-node-2] 2026-02-17 04:16:03.900274 | orchestrator | 2026-02-17 04:16:03.900286 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-17 04:16:03.900296 | orchestrator | Tuesday 17 February 2026 04:15:56 +0000 (0:00:00.526) 0:01:13.985 ****** 2026-02-17 04:16:03.900307 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-17 04:16:03.900319 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:16:03.900330 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-17 04:16:03.900341 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:16:03.900351 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-17 04:16:03.900362 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:16:03.900373 | orchestrator | 2026-02-17 04:16:03.900384 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-17 04:16:03.900394 | orchestrator | Tuesday 17 February 2026 04:15:59 +0000 (0:00:03.047) 0:01:17.033 ****** 2026-02-17 04:16:03.900405 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:16:03.900416 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:16:03.900427 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:16:03.900438 | orchestrator | 2026-02-17 04:16:03.900449 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-17 04:16:03.900466 | orchestrator | Tuesday 17 February 2026 04:16:03 +0000 (0:00:03.948) 0:01:20.981 ****** 2026-02-17 04:17:11.237620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:17:11.237867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:17:11.237956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 04:17:11.237972 | orchestrator | 2026-02-17 04:17:11.237985 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-17 04:17:11.238009 | orchestrator | Tuesday 17 February 2026 04:16:07 +0000 (0:00:03.351) 0:01:24.332 ****** 2026-02-17 04:17:11.238085 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:17:11.238123 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:17:11.238141 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:17:11.238158 | orchestrator | 2026-02-17 04:17:11.238173 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-17 04:17:11.238185 | orchestrator | Tuesday 17 February 2026 04:16:07 +0000 (0:00:00.528) 0:01:24.861 ****** 2026-02-17 04:17:11.238196 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:17:11.238207 | orchestrator | 2026-02-17 04:17:11.238218 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-17 04:17:11.238230 | orchestrator | Tuesday 17 February 2026 04:16:09 +0000 (0:00:02.043) 0:01:26.904 ****** 2026-02-17 04:17:11.238251 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:17:11.238263 | orchestrator | 2026-02-17 04:17:11.238273 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-17 04:17:11.238284 | orchestrator | Tuesday 17 February 2026 04:16:12 +0000 (0:00:02.198) 0:01:29.102 ****** 2026-02-17 04:17:11.238296 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:17:11.238307 | orchestrator | 2026-02-17 04:17:11.238318 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-17 04:17:11.238328 | orchestrator | Tuesday 17 February 2026 04:16:14 +0000 (0:00:02.012) 0:01:31.115 ****** 2026-02-17 04:17:11.238338 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:17:11.238348 | orchestrator | 2026-02-17 04:17:11.238357 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-17 04:17:11.238367 | orchestrator | Tuesday 17 February 2026 04:16:40 +0000 (0:00:26.787) 0:01:57.902 ****** 2026-02-17 04:17:11.238376 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:17:11.238386 | orchestrator | 2026-02-17 04:17:11.238395 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-17 04:17:11.238405 | orchestrator | Tuesday 17 February 2026 04:16:42 +0000 (0:00:01.966) 0:01:59.868 ****** 2026-02-17 04:17:11.238415 | orchestrator | 2026-02-17 04:17:11.238424 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-17 04:17:11.238434 | orchestrator | Tuesday 17 February 2026 04:16:42 +0000 (0:00:00.070) 0:01:59.939 ****** 2026-02-17 04:17:11.238443 | orchestrator | 2026-02-17 04:17:11.238453 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-17 04:17:11.238462 | orchestrator | Tuesday 17 February 2026 04:16:42 +0000 (0:00:00.070) 0:02:00.009 ****** 2026-02-17 04:17:11.238472 | orchestrator | 2026-02-17 04:17:11.238482 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-17 04:17:11.238491 | orchestrator | Tuesday 17 February 2026 04:16:42 +0000 (0:00:00.071) 0:02:00.081 ****** 2026-02-17 04:17:11.238501 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:17:11.238510 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:17:11.238520 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:17:11.238530 | orchestrator | 2026-02-17 04:17:11.238539 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:17:11.238550 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-17 04:17:11.238561 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-17 04:17:11.238571 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-17 04:17:11.238580 | orchestrator | 2026-02-17 04:17:11.238590 | orchestrator | 2026-02-17 04:17:11.238600 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:17:11.238609 | orchestrator | Tuesday 17 February 2026 04:17:11 +0000 (0:00:28.225) 0:02:28.306 ****** 2026-02-17 04:17:11.238619 | orchestrator | =============================================================================== 2026-02-17 04:17:11.238628 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.23s 2026-02-17 04:17:11.238638 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.79s 2026-02-17 04:17:11.238648 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.10s 2026-02-17 04:17:11.238666 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.46s 2026-02-17 04:17:11.553986 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.95s 2026-02-17 04:17:11.554167 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.89s 2026-02-17 04:17:11.554204 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.85s 2026-02-17 04:17:11.554239 | orchestrator | glance : Copying over config.json files for services -------------------- 3.81s 2026-02-17 04:17:11.554250 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.76s 2026-02-17 04:17:11.554261 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.59s 2026-02-17 04:17:11.554272 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.53s 2026-02-17 04:17:11.554282 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.44s 2026-02-17 04:17:11.554294 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.43s 2026-02-17 04:17:11.554305 | orchestrator | glance : Check glance containers ---------------------------------------- 3.35s 2026-02-17 04:17:11.554315 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.34s 2026-02-17 04:17:11.554326 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.34s 2026-02-17 04:17:11.554337 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.30s 2026-02-17 04:17:11.554348 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.17s 2026-02-17 04:17:11.554359 | orchestrator | 
service-ks-register : glance | Creating projects ------------------------ 3.16s 2026-02-17 04:17:11.554370 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.11s 2026-02-17 04:17:13.883508 | orchestrator | 2026-02-17 04:17:13 | INFO  | Task 045a4f37-4b57-4eec-89f9-adfd3cbc0718 (cinder) was prepared for execution. 2026-02-17 04:17:13.883593 | orchestrator | 2026-02-17 04:17:13 | INFO  | It takes a moment until task 045a4f37-4b57-4eec-89f9-adfd3cbc0718 (cinder) has been started and output is visible here. 2026-02-17 04:17:48.318079 | orchestrator | 2026-02-17 04:17:48.318197 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:17:48.318215 | orchestrator | 2026-02-17 04:17:48.318227 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:17:48.318239 | orchestrator | Tuesday 17 February 2026 04:17:18 +0000 (0:00:00.256) 0:00:00.256 ****** 2026-02-17 04:17:48.318250 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:17:48.318262 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:17:48.318273 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:17:48.318284 | orchestrator | 2026-02-17 04:17:48.318295 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:17:48.318306 | orchestrator | Tuesday 17 February 2026 04:17:18 +0000 (0:00:00.303) 0:00:00.560 ****** 2026-02-17 04:17:48.318317 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-17 04:17:48.318329 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-17 04:17:48.318340 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-17 04:17:48.318351 | orchestrator | 2026-02-17 04:17:48.318362 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-17 04:17:48.318373 | orchestrator | 2026-02-17 
04:17:48.318384 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-17 04:17:48.318395 | orchestrator | Tuesday 17 February 2026 04:17:18 +0000 (0:00:00.432) 0:00:00.992 ****** 2026-02-17 04:17:48.318407 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:17:48.318419 | orchestrator | 2026-02-17 04:17:48.318430 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-17 04:17:48.318441 | orchestrator | Tuesday 17 February 2026 04:17:19 +0000 (0:00:00.531) 0:00:01.523 ****** 2026-02-17 04:17:48.318453 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-17 04:17:48.318464 | orchestrator | 2026-02-17 04:17:48.318475 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-17 04:17:48.318486 | orchestrator | Tuesday 17 February 2026 04:17:22 +0000 (0:00:03.363) 0:00:04.886 ****** 2026-02-17 04:17:48.318498 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-17 04:17:48.318532 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-17 04:17:48.318544 | orchestrator | 2026-02-17 04:17:48.318558 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-17 04:17:48.318570 | orchestrator | Tuesday 17 February 2026 04:17:29 +0000 (0:00:06.532) 0:00:11.418 ****** 2026-02-17 04:17:48.318583 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:17:48.318596 | orchestrator | 2026-02-17 04:17:48.318630 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-17 04:17:48.318643 | orchestrator | Tuesday 17 February 2026 04:17:32 +0000 (0:00:03.168) 
0:00:14.587 ****** 2026-02-17 04:17:48.318655 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:17:48.318668 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-17 04:17:48.318680 | orchestrator | 2026-02-17 04:17:48.318693 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-17 04:17:48.318705 | orchestrator | Tuesday 17 February 2026 04:17:36 +0000 (0:00:03.902) 0:00:18.490 ****** 2026-02-17 04:17:48.318717 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-17 04:17:48.318730 | orchestrator | 2026-02-17 04:17:48.318742 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-17 04:17:48.318755 | orchestrator | Tuesday 17 February 2026 04:17:39 +0000 (0:00:03.023) 0:00:21.513 ****** 2026-02-17 04:17:48.318767 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-17 04:17:48.318779 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-17 04:17:48.318796 | orchestrator | 2026-02-17 04:17:48.318830 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-17 04:17:48.318842 | orchestrator | Tuesday 17 February 2026 04:17:46 +0000 (0:00:07.026) 0:00:28.539 ****** 2026-02-17 04:17:48.318856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:17:48.318891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:17:48.318904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:17:48.318925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:48.318938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:48.318971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:48.318984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:48.319003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:54.124998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:54.125115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:54.125149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:54.125163 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:17:54.125175 | orchestrator | 2026-02-17 04:17:54.125189 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-17 04:17:54.125201 | orchestrator | Tuesday 17 February 2026 04:17:48 +0000 (0:00:02.080) 0:00:30.620 ****** 2026-02-17 04:17:54.125212 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:17:54.125225 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:17:54.125235 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:17:54.125246 | orchestrator | 2026-02-17 04:17:54.125258 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-17 04:17:54.125269 | orchestrator | Tuesday 17 February 2026 04:17:48 +0000 (0:00:00.495) 0:00:31.116 ****** 2026-02-17 04:17:54.125297 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:17:54.125309 | orchestrator | 2026-02-17 04:17:54.125331 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-17 04:17:54.125343 | orchestrator | Tuesday 17 February 2026 04:17:49 +0000 (0:00:00.545) 0:00:31.661 ****** 2026-02-17 04:17:54.125376 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-17 04:17:54.125389 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-17 04:17:54.125400 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-17 04:17:54.125411 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-17 04:17:54.125422 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-17 04:17:54.125433 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-17 04:17:54.125443 | orchestrator | 2026-02-17 04:17:54.125454 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-17 04:17:54.125465 | orchestrator | Tuesday 17 February 2026 04:17:51 +0000 (0:00:01.607) 0:00:33.269 ****** 2026-02-17 04:17:54.125496 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-17 04:17:54.125511 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-17 04:17:54.125530 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-17 04:17:54.125544 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-17 04:17:54.125573 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-17 04:18:04.823839 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-17 04:18:04.823949 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-17 04:18:04.823980 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-17 04:18:04.823992 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-17 04:18:04.824022 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-17 04:18:04.824051 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-17 
04:18:04.824062 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-17 04:18:04.824072 | orchestrator | 2026-02-17 04:18:04.824083 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-17 04:18:04.824094 | orchestrator | Tuesday 17 February 2026 04:17:54 +0000 (0:00:03.360) 0:00:36.630 ****** 2026-02-17 04:18:04.824104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-17 04:18:04.824115 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-17 04:18:04.824125 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-17 04:18:04.824135 | orchestrator | 2026-02-17 04:18:04.824145 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-17 04:18:04.824154 | orchestrator | Tuesday 17 February 2026 04:17:55 +0000 (0:00:01.578) 0:00:38.209 ****** 2026-02-17 04:18:04.824165 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-17 04:18:04.824180 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-17 04:18:04.824191 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-17 04:18:04.824200 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-17 04:18:04.824210 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-17 04:18:04.824220 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-17 04:18:04.824236 | orchestrator | 2026-02-17 04:18:04.824246 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-17 04:18:04.824256 | orchestrator | Tuesday 17 February 2026 04:17:58 +0000 (0:00:02.647) 0:00:40.856 ****** 2026-02-17 04:18:04.824265 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-17 04:18:04.824276 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-17 04:18:04.824285 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-17 04:18:04.824295 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-17 04:18:04.824305 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-17 04:18:04.824314 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-17 04:18:04.824324 | orchestrator | 2026-02-17 04:18:04.824334 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-17 04:18:04.824343 | orchestrator | Tuesday 17 February 2026 04:17:59 +0000 (0:00:01.059) 0:00:41.916 ****** 2026-02-17 04:18:04.824353 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:18:04.824363 | orchestrator | 2026-02-17 04:18:04.824375 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-17 04:18:04.824387 | orchestrator | Tuesday 17 February 2026 04:17:59 +0000 (0:00:00.147) 0:00:42.063 ****** 2026-02-17 04:18:04.824398 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:18:04.824409 | orchestrator | 
skipping: [testbed-node-1] 2026-02-17 04:18:04.824421 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:18:04.824432 | orchestrator | 2026-02-17 04:18:04.824443 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-17 04:18:04.824455 | orchestrator | Tuesday 17 February 2026 04:18:00 +0000 (0:00:00.482) 0:00:42.545 ****** 2026-02-17 04:18:04.824466 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:18:04.824478 | orchestrator | 2026-02-17 04:18:04.824489 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-17 04:18:04.824500 | orchestrator | Tuesday 17 February 2026 04:18:00 +0000 (0:00:00.547) 0:00:43.093 ****** 2026-02-17 04:18:04.824518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:05.679017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:05.679141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:05.679181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:05.679195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:05.679207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:05.679237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:05.679250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:05.679328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 
04:18:05.679343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:05.679355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:05.679367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:05.679379 | orchestrator | 2026-02-17 04:18:05.679393 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-17 04:18:05.679406 | orchestrator | Tuesday 17 February 2026 04:18:04 +0000 (0:00:04.043) 0:00:47.137 ****** 2026-02-17 04:18:05.679427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:05.779831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.779912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.779923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.779933 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:18:05.779945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:05.779955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.779979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.780010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.780020 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:18:05.780029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:05.780038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.780048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.780057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:05.780072 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 04:18:05.780081 | orchestrator | 2026-02-17 04:18:05.780091 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-17 04:18:05.780107 | orchestrator | Tuesday 17 February 2026 04:18:05 +0000 (0:00:00.864) 0:00:48.002 ****** 2026-02-17 04:18:06.338890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:06.338981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:18:06.338996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 04:18:06.339006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:06.339016 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:18:06.339027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:06.339085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:18:06.339105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 04:18:06.339116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:06.339128 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:18:06.339140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:06.339151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:18:06.339180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 04:18:10.969420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:10.969584 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:18:10.969615 | orchestrator | 2026-02-17 04:18:10.969636 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-17 04:18:10.969656 | orchestrator | Tuesday 17 February 2026 04:18:06 +0000 (0:00:00.877) 0:00:48.879 ****** 2026-02-17 04:18:10.969676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:10.969699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 
04:18:10.969755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:10.969798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:10.969821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:10.969834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:10.969846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:10.969858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:10.969879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:10.969900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:23.489632 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:23.489726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:23.489738 | orchestrator | 2026-02-17 04:18:23.489747 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-17 04:18:23.489755 | orchestrator | Tuesday 17 February 2026 04:18:11 +0000 (0:00:04.395) 0:00:53.274 ****** 2026-02-17 04:18:23.489762 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-17 04:18:23.489770 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-17 04:18:23.489777 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-17 04:18:23.489783 | orchestrator | 2026-02-17 04:18:23.489790 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-17 04:18:23.489797 | orchestrator | Tuesday 17 February 2026 04:18:12 +0000 (0:00:01.834) 0:00:55.109 ****** 2026-02-17 04:18:23.489823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:23.489832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:23.489857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:23.489866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:23.489873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:23.489885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:23.489892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:23.489900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:23.489916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:25.894503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:25.894611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:25.894652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:25.894665 | orchestrator | 2026-02-17 04:18:25.894678 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-17 04:18:25.894691 | orchestrator | Tuesday 17 February 2026 04:18:23 +0000 (0:00:10.680) 0:01:05.789 ****** 2026-02-17 04:18:25.894702 | orchestrator | changed: [testbed-node-0] 
2026-02-17 04:18:25.894715 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:18:25.894726 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:18:25.894736 | orchestrator | 2026-02-17 04:18:25.894748 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-17 04:18:25.894759 | orchestrator | Tuesday 17 February 2026 04:18:25 +0000 (0:00:01.530) 0:01:07.320 ****** 2026-02-17 04:18:25.894772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:25.894847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-17 04:18:25.894881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 04:18:25.894902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:25.894914 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:18:25.894926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:25.894938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:18:25.894949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 04:18:25.894976 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:29.336642 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:18:29.336763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-17 04:18:29.336803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:18:29.336815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 04:18:29.336825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 04:18:29.336834 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:18:29.336843 | orchestrator | 2026-02-17 
04:18:29.336852 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-17 04:18:29.336863 | orchestrator | Tuesday 17 February 2026 04:18:25 +0000 (0:00:00.886) 0:01:08.207 ****** 2026-02-17 04:18:29.336871 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:18:29.336881 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:18:29.336890 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:18:29.336898 | orchestrator | 2026-02-17 04:18:29.336907 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-17 04:18:29.336915 | orchestrator | Tuesday 17 February 2026 04:18:26 +0000 (0:00:00.544) 0:01:08.751 ****** 2026-02-17 04:18:29.337006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:29.337030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:29.337040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-17 04:18:29.337050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:29.337059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:29.337074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:18:29.337098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:02.484989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:02.485180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:02.485200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:02.485214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:02.485242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-17 04:20:02.485355 | orchestrator | 2026-02-17 04:20:02.485382 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-17 04:20:02.485403 | orchestrator | Tuesday 17 February 2026 04:18:29 +0000 (0:00:02.886) 0:01:11.638 ****** 2026-02-17 04:20:02.485421 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:20:02.485441 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:20:02.485458 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:20:02.485477 | orchestrator | 2026-02-17 04:20:02.485494 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-17 04:20:02.485514 | orchestrator | Tuesday 17 February 2026 04:18:29 +0000 (0:00:00.324) 0:01:11.963 ****** 2026-02-17 04:20:02.485535 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:20:02.485554 | orchestrator | 2026-02-17 04:20:02.485592 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-17 04:20:02.485606 | orchestrator | Tuesday 17 February 2026 04:18:31 +0000 (0:00:02.016) 0:01:13.980 ****** 2026-02-17 04:20:02.485620 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:20:02.485633 | orchestrator | 2026-02-17 04:20:02.485646 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-17 04:20:02.485658 | orchestrator | Tuesday 17 February 2026 04:18:33 +0000 (0:00:02.121) 0:01:16.101 ****** 2026-02-17 04:20:02.485671 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:20:02.485684 | orchestrator | 2026-02-17 04:20:02.485697 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-17 04:20:02.485709 | orchestrator | Tuesday 17 February 2026 04:18:52 +0000 (0:00:18.737) 0:01:34.838 ****** 2026-02-17 04:20:02.485722 | orchestrator | 2026-02-17 04:20:02.485734 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-17 04:20:02.485747 | orchestrator | Tuesday 17 February 2026 04:18:52 +0000 (0:00:00.068) 0:01:34.906 ****** 2026-02-17 04:20:02.485759 | orchestrator | 2026-02-17 04:20:02.485771 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-17 04:20:02.485784 | orchestrator | Tuesday 17 February 2026 04:18:52 +0000 (0:00:00.067) 0:01:34.974 ****** 2026-02-17 04:20:02.485797 | orchestrator | 2026-02-17 04:20:02.485809 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-17 04:20:02.485821 | orchestrator | Tuesday 17 February 2026 04:18:52 +0000 (0:00:00.069) 0:01:35.044 ****** 2026-02-17 04:20:02.485834 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:20:02.485847 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:20:02.485859 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:20:02.485872 | orchestrator | 2026-02-17 04:20:02.485884 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-17 04:20:02.485895 | orchestrator | Tuesday 17 February 2026 04:19:18 +0000 (0:00:25.867) 0:02:00.911 ****** 2026-02-17 04:20:02.485905 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:20:02.485916 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:20:02.485927 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:20:02.485938 | orchestrator | 2026-02-17 04:20:02.485948 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-17 04:20:02.485959 | orchestrator | Tuesday 17 February 2026 04:19:28 +0000 (0:00:10.092) 0:02:11.004 ****** 2026-02-17 04:20:02.485970 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:20:02.485981 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:20:02.485992 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:20:02.486014 | orchestrator | 2026-02-17 
04:20:02.486118 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-17 04:20:02.486132 | orchestrator | Tuesday 17 February 2026 04:19:51 +0000 (0:00:22.458) 0:02:33.463 ****** 2026-02-17 04:20:02.486143 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:20:02.486154 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:20:02.486165 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:20:02.486176 | orchestrator | 2026-02-17 04:20:02.486186 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-17 04:20:02.486199 | orchestrator | Tuesday 17 February 2026 04:20:02 +0000 (0:00:10.950) 0:02:44.413 ****** 2026-02-17 04:20:02.486210 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:20:02.486221 | orchestrator | 2026-02-17 04:20:02.486232 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:20:02.486243 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-17 04:20:02.486256 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-17 04:20:02.486267 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-17 04:20:02.486278 | orchestrator | 2026-02-17 04:20:02.486289 | orchestrator | 2026-02-17 04:20:02.486300 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:20:02.486311 | orchestrator | Tuesday 17 February 2026 04:20:02 +0000 (0:00:00.281) 0:02:44.695 ****** 2026-02-17 04:20:02.486322 | orchestrator | =============================================================================== 2026-02-17 04:20:02.486341 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.87s 2026-02-17 04:20:02.486352 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 22.46s 2026-02-17 04:20:02.486363 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.74s 2026-02-17 04:20:02.486374 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.95s 2026-02-17 04:20:02.486385 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.68s 2026-02-17 04:20:02.486395 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.09s 2026-02-17 04:20:02.486406 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.03s 2026-02-17 04:20:02.486417 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.53s 2026-02-17 04:20:02.486427 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.40s 2026-02-17 04:20:02.486438 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.04s 2026-02-17 04:20:02.486449 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.90s 2026-02-17 04:20:02.486460 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.36s 2026-02-17 04:20:02.486470 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.36s 2026-02-17 04:20:02.486489 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.17s 2026-02-17 04:20:02.486518 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.02s 2026-02-17 04:20:02.891530 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.89s 2026-02-17 04:20:02.891656 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.65s 2026-02-17 04:20:02.891678 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.12s 2026-02-17 04:20:02.891697 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.08s 2026-02-17 04:20:02.891715 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.02s 2026-02-17 04:20:05.555014 | orchestrator | 2026-02-17 04:20:05 | INFO  | Task 506c7661-d0d8-4f05-abd1-cf3561cc36e3 (barbican) was prepared for execution. 2026-02-17 04:20:05.555186 | orchestrator | 2026-02-17 04:20:05 | INFO  | It takes a moment until task 506c7661-d0d8-4f05-abd1-cf3561cc36e3 (barbican) has been started and output is visible here. 2026-02-17 04:20:48.441362 | orchestrator | 2026-02-17 04:20:48.441479 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:20:48.441495 | orchestrator | 2026-02-17 04:20:48.441508 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:20:48.441519 | orchestrator | Tuesday 17 February 2026 04:20:10 +0000 (0:00:00.315) 0:00:00.315 ****** 2026-02-17 04:20:48.441531 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:20:48.441544 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:20:48.441555 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:20:48.441566 | orchestrator | 2026-02-17 04:20:48.441578 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:20:48.441589 | orchestrator | Tuesday 17 February 2026 04:20:10 +0000 (0:00:00.332) 0:00:00.647 ****** 2026-02-17 04:20:48.441600 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-17 04:20:48.441611 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-17 04:20:48.441623 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-17 04:20:48.441634 | orchestrator | 2026-02-17 04:20:48.441645 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-17 04:20:48.441656 | orchestrator | 2026-02-17 04:20:48.441667 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-17 04:20:48.441678 | orchestrator | Tuesday 17 February 2026 04:20:11 +0000 (0:00:00.462) 0:00:01.109 ****** 2026-02-17 04:20:48.441690 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:20:48.441702 | orchestrator | 2026-02-17 04:20:48.441713 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-17 04:20:48.441724 | orchestrator | Tuesday 17 February 2026 04:20:11 +0000 (0:00:00.544) 0:00:01.653 ****** 2026-02-17 04:20:48.441736 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-17 04:20:48.441747 | orchestrator | 2026-02-17 04:20:48.441758 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-17 04:20:48.441769 | orchestrator | Tuesday 17 February 2026 04:20:14 +0000 (0:00:03.328) 0:00:04.982 ****** 2026-02-17 04:20:48.441780 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-17 04:20:48.441791 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-17 04:20:48.441802 | orchestrator | 2026-02-17 04:20:48.441813 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-17 04:20:48.441824 | orchestrator | Tuesday 17 February 2026 04:20:21 +0000 (0:00:06.182) 0:00:11.164 ****** 2026-02-17 04:20:48.441836 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:20:48.441847 | orchestrator | 2026-02-17 04:20:48.441858 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-17 
04:20:48.441869 | orchestrator | Tuesday 17 February 2026 04:20:24 +0000 (0:00:03.131) 0:00:14.296 ****** 2026-02-17 04:20:48.441880 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:20:48.441891 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-17 04:20:48.441902 | orchestrator | 2026-02-17 04:20:48.441929 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-17 04:20:48.441941 | orchestrator | Tuesday 17 February 2026 04:20:28 +0000 (0:00:03.861) 0:00:18.157 ****** 2026-02-17 04:20:48.441976 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-17 04:20:48.441988 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-17 04:20:48.441999 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-17 04:20:48.442092 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-17 04:20:48.442107 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-17 04:20:48.442118 | orchestrator | 2026-02-17 04:20:48.442128 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-17 04:20:48.442139 | orchestrator | Tuesday 17 February 2026 04:20:43 +0000 (0:00:14.898) 0:00:33.056 ****** 2026-02-17 04:20:48.442150 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-17 04:20:48.442161 | orchestrator | 2026-02-17 04:20:48.442172 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-17 04:20:48.442182 | orchestrator | Tuesday 17 February 2026 04:20:46 +0000 (0:00:03.690) 0:00:36.746 ****** 2026-02-17 04:20:48.442196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:20:48.442229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:20:48.442242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:20:48.442261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:48.442283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:48.442295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:48.442316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:53.988512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:53.988631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:53.988650 | orchestrator | 2026-02-17 04:20:53.988664 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-17 04:20:53.988677 | orchestrator | Tuesday 17 February 2026 04:20:48 +0000 (0:00:01.668) 0:00:38.414 ****** 2026-02-17 04:20:53.988688 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-17 04:20:53.988700 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-17 04:20:53.988710 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-17 04:20:53.988749 | orchestrator | 2026-02-17 04:20:53.988768 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-17 04:20:53.988787 | orchestrator | Tuesday 17 February 2026 04:20:49 +0000 (0:00:01.086) 0:00:39.501 ****** 2026-02-17 04:20:53.988805 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:20:53.988823 | orchestrator | 2026-02-17 04:20:53.988841 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-17 04:20:53.988859 | orchestrator | Tuesday 17 February 2026 04:20:49 +0000 (0:00:00.313) 0:00:39.815 ****** 2026-02-17 04:20:53.988896 | orchestrator | 
skipping: [testbed-node-0] 2026-02-17 04:20:53.988917 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:20:53.988968 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:20:53.988988 | orchestrator | 2026-02-17 04:20:53.989002 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-17 04:20:53.989015 | orchestrator | Tuesday 17 February 2026 04:20:50 +0000 (0:00:00.287) 0:00:40.102 ****** 2026-02-17 04:20:53.989028 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:20:53.989042 | orchestrator | 2026-02-17 04:20:53.989061 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-17 04:20:53.989079 | orchestrator | Tuesday 17 February 2026 04:20:50 +0000 (0:00:00.526) 0:00:40.629 ****** 2026-02-17 04:20:53.989099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:20:53.989147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:20:53.989170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:20:53.989204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:53.989225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:53.989238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:53.989249 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:53.989347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:55.376877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:20:55.377065 | orchestrator | 2026-02-17 04:20:55.377088 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-17 04:20:55.377101 | orchestrator | Tuesday 17 February 2026 04:20:53 +0000 (0:00:03.325) 0:00:43.954 ****** 2026-02-17 04:20:55.377129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:20:55.377143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:20:55.377156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:20:55.377168 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:20:55.377197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:20:55.377243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:20:55.377264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:20:55.377276 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:20:55.377294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:20:55.377306 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:20:55.377318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:20:55.377329 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:20:55.377341 | orchestrator | 2026-02-17 04:20:55.377352 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-17 04:20:55.377363 | orchestrator | Tuesday 17 February 2026 04:20:54 +0000 (0:00:00.602) 0:00:44.556 ****** 2026-02-17 04:20:55.377385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:20:58.831039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:20:58.831191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 
04:20:58.831210 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:20:58.831225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:20:58.831238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:20:58.831249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:20:58.832162 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:20:58.832242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:20:58.832266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:20:58.832299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:20:58.832319 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:20:58.832337 | orchestrator | 2026-02-17 04:20:58.832354 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-17 04:20:58.832373 | orchestrator | Tuesday 17 February 2026 04:20:55 +0000 (0:00:00.805) 0:00:45.362 ****** 2026-02-17 04:20:58.832392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:20:58.832412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:20:58.832463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:21:08.083928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.084063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.084082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.084096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.084132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.084144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.084157 | orchestrator | 2026-02-17 04:21:08.084170 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-17 04:21:08.084182 | orchestrator | Tuesday 17 February 2026 04:20:58 +0000 (0:00:03.448) 0:00:48.810 ****** 2026-02-17 04:21:08.084194 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:21:08.084206 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:21:08.084218 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:21:08.084229 | orchestrator | 2026-02-17 04:21:08.084261 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-17 04:21:08.084273 | orchestrator | Tuesday 17 February 2026 04:21:00 +0000 (0:00:01.518) 0:00:50.328 ****** 2026-02-17 04:21:08.084284 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 04:21:08.084295 | orchestrator | 2026-02-17 04:21:08.084306 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-17 04:21:08.084317 | orchestrator | Tuesday 17 February 2026 04:21:01 +0000 (0:00:00.944) 0:00:51.273 ****** 2026-02-17 04:21:08.084328 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:21:08.084339 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:21:08.084350 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:21:08.084361 | orchestrator | 2026-02-17 04:21:08.084372 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-17 04:21:08.084383 | orchestrator | Tuesday 17 February 2026 04:21:01 +0000 (0:00:00.562) 0:00:51.836 ****** 2026-02-17 04:21:08.084505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:21:08.084531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:21:08.084555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:21:08.084580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.927622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.927707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.927718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.927742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.927749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:08.927757 | orchestrator | 2026-02-17 04:21:08.927765 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-17 04:21:08.927772 | orchestrator | Tuesday 17 February 2026 04:21:08 +0000 (0:00:06.225) 0:00:58.061 ****** 2026-02-17 04:21:08.927790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:21:08.927802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:21:08.927809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:21:08.927825 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:21:08.927832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:21:08.927839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:21:08.927846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:21:08.927852 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:21:08.927869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-17 04:21:11.180851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 04:21:11.181034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:21:11.181053 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:21:11.181068 | orchestrator | 2026-02-17 04:21:11.181080 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-17 04:21:11.181092 | orchestrator | Tuesday 17 February 2026 04:21:08 +0000 (0:00:00.846) 0:00:58.908 ****** 2026-02-17 04:21:11.181105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:21:11.181117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:21:11.181164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-17 04:21:11.181184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:11.181197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:11.181208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:11.181220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:11.181232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:11.181244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:21:11.181255 | orchestrator | 2026-02-17 04:21:11.181272 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-17 04:21:11.181298 | orchestrator | Tuesday 17 February 2026 04:21:11 +0000 (0:00:02.249) 0:01:01.158 ****** 2026-02-17 04:21:55.015008 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:21:55.015122 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
04:21:55.015138 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:21:55.015151 | orchestrator | 2026-02-17 04:21:55.015163 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-17 04:21:55.015175 | orchestrator | Tuesday 17 February 2026 04:21:11 +0000 (0:00:00.341) 0:01:01.500 ****** 2026-02-17 04:21:55.015187 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:21:55.015214 | orchestrator | 2026-02-17 04:21:55.015236 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-17 04:21:55.015248 | orchestrator | Tuesday 17 February 2026 04:21:13 +0000 (0:00:02.060) 0:01:03.560 ****** 2026-02-17 04:21:55.015259 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:21:55.015269 | orchestrator | 2026-02-17 04:21:55.015280 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-17 04:21:55.015291 | orchestrator | Tuesday 17 February 2026 04:21:15 +0000 (0:00:02.189) 0:01:05.750 ****** 2026-02-17 04:21:55.015302 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:21:55.015313 | orchestrator | 2026-02-17 04:21:55.015324 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-17 04:21:55.015335 | orchestrator | Tuesday 17 February 2026 04:21:27 +0000 (0:00:12.219) 0:01:17.969 ****** 2026-02-17 04:21:55.015347 | orchestrator | 2026-02-17 04:21:55.015357 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-17 04:21:55.015368 | orchestrator | Tuesday 17 February 2026 04:21:28 +0000 (0:00:00.067) 0:01:18.037 ****** 2026-02-17 04:21:55.015379 | orchestrator | 2026-02-17 04:21:55.015390 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-17 04:21:55.015401 | orchestrator | Tuesday 17 February 2026 04:21:28 +0000 (0:00:00.068) 0:01:18.106 ****** 2026-02-17 
04:21:55.015412 | orchestrator | 2026-02-17 04:21:55.015422 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-17 04:21:55.015433 | orchestrator | Tuesday 17 February 2026 04:21:28 +0000 (0:00:00.079) 0:01:18.185 ****** 2026-02-17 04:21:55.015444 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:21:55.015456 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:21:55.015467 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:21:55.015477 | orchestrator | 2026-02-17 04:21:55.015489 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-17 04:21:55.015500 | orchestrator | Tuesday 17 February 2026 04:21:39 +0000 (0:00:11.359) 0:01:29.545 ****** 2026-02-17 04:21:55.015511 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:21:55.015522 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:21:55.015533 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:21:55.015546 | orchestrator | 2026-02-17 04:21:55.015558 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-17 04:21:55.015572 | orchestrator | Tuesday 17 February 2026 04:21:49 +0000 (0:00:09.704) 0:01:39.249 ****** 2026-02-17 04:21:55.015585 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:21:55.015598 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:21:55.015611 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:21:55.015623 | orchestrator | 2026-02-17 04:21:55.015636 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:21:55.015650 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-17 04:21:55.015664 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 04:21:55.015677 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 04:21:55.015690 | orchestrator | 2026-02-17 04:21:55.015727 | orchestrator | 2026-02-17 04:21:55.015740 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:21:55.015753 | orchestrator | Tuesday 17 February 2026 04:21:54 +0000 (0:00:05.421) 0:01:44.671 ****** 2026-02-17 04:21:55.015786 | orchestrator | =============================================================================== 2026-02-17 04:21:55.015799 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.90s 2026-02-17 04:21:55.015812 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.22s 2026-02-17 04:21:55.015825 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.36s 2026-02-17 04:21:55.015838 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.70s 2026-02-17 04:21:55.015850 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.23s 2026-02-17 04:21:55.015863 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.18s 2026-02-17 04:21:55.015876 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.42s 2026-02-17 04:21:55.015889 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.86s 2026-02-17 04:21:55.015900 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.69s 2026-02-17 04:21:55.015911 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.45s 2026-02-17 04:21:55.015922 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.33s 2026-02-17 04:21:55.015933 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.33s 
2026-02-17 04:21:55.015944 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.13s 2026-02-17 04:21:55.015970 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.25s 2026-02-17 04:21:55.015982 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.19s 2026-02-17 04:21:55.016011 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.06s 2026-02-17 04:21:55.016023 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.67s 2026-02-17 04:21:55.016034 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.52s 2026-02-17 04:21:55.016045 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.09s 2026-02-17 04:21:55.016056 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.94s 2026-02-17 04:21:57.304033 | orchestrator | 2026-02-17 04:21:57 | INFO  | Task 2820ebb6-dccf-4079-b8ef-95490acffdcc (designate) was prepared for execution. 2026-02-17 04:21:57.304152 | orchestrator | 2026-02-17 04:21:57 | INFO  | It takes a moment until task 2820ebb6-dccf-4079-b8ef-95490acffdcc (designate) has been started and output is visible here. 
2026-02-17 04:22:28.333031 | orchestrator |
2026-02-17 04:22:28.333183 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 04:22:28.333202 | orchestrator |
2026-02-17 04:22:28.333214 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 04:22:28.333226 | orchestrator | Tuesday 17 February 2026 04:22:01 +0000 (0:00:00.266) 0:00:00.266 ******
2026-02-17 04:22:28.333238 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:22:28.333250 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:22:28.333261 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:22:28.333272 | orchestrator |
2026-02-17 04:22:28.333283 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 04:22:28.333294 | orchestrator | Tuesday 17 February 2026 04:22:01 +0000 (0:00:00.317) 0:00:00.583 ******
2026-02-17 04:22:28.333306 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-17 04:22:28.333317 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-17 04:22:28.333328 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-17 04:22:28.333339 | orchestrator |
2026-02-17 04:22:28.333350 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-17 04:22:28.333387 | orchestrator |
2026-02-17 04:22:28.333398 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-17 04:22:28.333409 | orchestrator | Tuesday 17 February 2026 04:22:02 +0000 (0:00:00.444) 0:00:01.028 ******
2026-02-17 04:22:28.333421 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:22:28.333432 | orchestrator |
2026-02-17 04:22:28.333443 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-17 04:22:28.333454 | orchestrator | Tuesday 17 February 2026 04:22:02 +0000 (0:00:00.557) 0:00:01.586 ******
2026-02-17 04:22:28.333465 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-17 04:22:28.333476 | orchestrator |
2026-02-17 04:22:28.333487 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-17 04:22:28.333497 | orchestrator | Tuesday 17 February 2026 04:22:06 +0000 (0:00:03.329) 0:00:04.916 ******
2026-02-17 04:22:28.333508 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-17 04:22:28.333519 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-17 04:22:28.333530 | orchestrator |
2026-02-17 04:22:28.333541 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-17 04:22:28.333554 | orchestrator | Tuesday 17 February 2026 04:22:12 +0000 (0:00:06.265) 0:00:11.181 ******
2026-02-17 04:22:28.333567 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-17 04:22:28.333580 | orchestrator |
2026-02-17 04:22:28.333593 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-17 04:22:28.333605 | orchestrator | Tuesday 17 February 2026 04:22:15 +0000 (0:00:03.261) 0:00:14.442 ******
2026-02-17 04:22:28.333618 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-17 04:22:28.333630 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-17 04:22:28.333642 | orchestrator |
2026-02-17 04:22:28.333655 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-17 04:22:28.333667 | orchestrator | Tuesday 17 February 2026 04:22:19 +0000 (0:00:03.922) 0:00:18.364 ******
2026-02-17 04:22:28.333679 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-02-17 04:22:28.333692 | orchestrator | 2026-02-17 04:22:28.333704 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-17 04:22:28.333739 | orchestrator | Tuesday 17 February 2026 04:22:22 +0000 (0:00:03.082) 0:00:21.447 ****** 2026-02-17 04:22:28.333752 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-17 04:22:28.333765 | orchestrator | 2026-02-17 04:22:28.333778 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-17 04:22:28.333790 | orchestrator | Tuesday 17 February 2026 04:22:26 +0000 (0:00:03.701) 0:00:25.149 ****** 2026-02-17 04:22:28.333818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:28.333856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:28.333878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:28.333891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:28.333904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:28.333920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:28.333933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:28.333959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 
04:22:34.353930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:34.353953 | orchestrator | 2026-02-17 04:22:34.353966 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-17 04:22:34.353978 | orchestrator | Tuesday 17 February 2026 04:22:29 +0000 (0:00:02.771) 0:00:27.920 ****** 2026-02-17 04:22:34.353990 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:22:34.354003 | orchestrator | 2026-02-17 04:22:34.354014 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-17 04:22:34.354086 | orchestrator | Tuesday 17 February 2026 04:22:29 +0000 (0:00:00.139) 0:00:28.060 ****** 2026-02-17 04:22:34.354098 | orchestrator | skipping: [testbed-node-0] 2026-02-17 
04:22:34.354109 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:22:34.354120 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:22:34.354132 | orchestrator | 2026-02-17 04:22:34.354143 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-17 04:22:34.354164 | orchestrator | Tuesday 17 February 2026 04:22:29 +0000 (0:00:00.488) 0:00:28.549 ****** 2026-02-17 04:22:34.354178 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:22:34.354190 | orchestrator | 2026-02-17 04:22:34.354203 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-17 04:22:34.354223 | orchestrator | Tuesday 17 February 2026 04:22:30 +0000 (0:00:00.548) 0:00:29.097 ****** 2026-02-17 04:22:34.354238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:34.354263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:36.090353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:36.090485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.090757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.931427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.931555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.931597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:36.931607 | orchestrator | 2026-02-17 04:22:36.931616 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-17 04:22:36.931625 | orchestrator | Tuesday 17 February 2026 04:22:36 +0000 (0:00:05.829) 0:00:34.926 ****** 2026-02-17 04:22:36.931654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-17 04:22:36.931663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 04:22:36.931691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 04:22:36.931941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 04:22:36.931970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 04:22:36.932004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-17 04:22:36.932011 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:22:36.932035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-17 04:22:36.932042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 04:22:36.932050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 04:22:36.932081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 04:22:37.658293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 04:22:37.658420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:22:37.658438 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:22:37.658468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-17 04:22:37.658482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 04:22:37.658494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 04:22:37.658506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 04:22:37.658543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 
04:22:37.658556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:22:37.658568 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:22:37.658580 | orchestrator | 2026-02-17 04:22:37.658593 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-17 04:22:37.658605 | orchestrator | Tuesday 17 February 2026 04:22:37 +0000 (0:00:00.948) 0:00:35.875 ****** 2026-02-17 04:22:37.658622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-17 04:22:37.658635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 04:22:37.658647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 04:22:37.658666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001204 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:22:38.001235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-17 04:22:38.001249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 04:22:38.001262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001357 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:22:38.001374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-17 04:22:38.001386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 04:22:38.001398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 04:22:38.001436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 04:22:42.455278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:22:42.455402 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:22:42.455422 | orchestrator | 2026-02-17 04:22:42.455436 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-17 
04:22:42.455448 | orchestrator | Tuesday 17 February 2026 04:22:37 +0000 (0:00:00.957) 0:00:36.833 ****** 2026-02-17 04:22:42.455477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:42.455491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:42.455526 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:42.455555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:42.455571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:42.455588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:42.455600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:42.455612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:42.455731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:42.455748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:42.455777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:53.610478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:53.611711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:53.611801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:53.611826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:53.611879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:53.611901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:53.611951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:22:53.611973 | orchestrator | 2026-02-17 04:22:53.611996 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-17 04:22:53.612017 | orchestrator | Tuesday 17 February 2026 04:22:44 +0000 (0:00:06.264) 0:00:43.097 ****** 2026-02-17 04:22:53.612048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:53.612062 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:53.612084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-17 04:22:53.612096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:22:53.612118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:01.490860 | orchestrator | 2026-02-17 04:23:01.490875 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-17 04:23:01.490891 | orchestrator | Tuesday 17 February 2026 04:22:57 +0000 (0:00:13.749) 0:00:56.847 ****** 2026-02-17 04:23:01.490915 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-17 04:23:05.682071 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-17 04:23:05.682168 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-17 04:23:05.682179 | orchestrator | 2026-02-17 04:23:05.682188 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-17 04:23:05.682196 | orchestrator | Tuesday 17 February 2026 04:23:01 +0000 (0:00:03.473) 0:01:00.321 ****** 2026-02-17 04:23:05.682204 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-17 04:23:05.682225 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-17 04:23:05.682233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-17 04:23:05.682260 | orchestrator | 2026-02-17 04:23:05.682273 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-17 04:23:05.682285 | orchestrator | Tuesday 17 February 2026 04:23:03 +0000 (0:00:02.429) 0:01:02.751 ****** 2026-02-17 04:23:05.682301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-17 04:23:05.682318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-17 04:23:05.682331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-17 04:23:05.682362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:23:05.682382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 04:23:05.682398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-17 04:23:05.682407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 04:23:05.682415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-17 04:23:05.682423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-17 04:23:05.682431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 04:23:05.682445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 04:23:08.548501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-17 04:23:08.548607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 04:23:08.548624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 04:23:08.548715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 04:23:08.548738 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:08.548751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:08.548789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:23:08.548823 | orchestrator | 2026-02-17 04:23:08.548837 | orchestrator | TASK [designate : Copying over rndc.key] 
***************************************
2026-02-17 04:23:08.548850 | orchestrator | Tuesday 17 February 2026 04:23:06 +0000 (0:00:02.894)       0:01:05.645 ******
2026-02-17 04:23:08.548863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:08.548876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:08.548888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:08.548900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:08.548930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:09.542306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:09.542436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:09.542536 | orchestrator |
2026-02-17 04:23:09.542551 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-17 04:23:09.542583 | orchestrator | Tuesday 17 February 2026 04:23:09 +0000 (0:00:02.730)       0:01:08.376 ******
2026-02-17 04:23:10.536063 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:23:10.536176 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:23:10.536192 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:23:10.536205 | orchestrator |
2026-02-17 04:23:10.536217 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-17 04:23:10.536230 | orchestrator | Tuesday 17 February 2026 04:23:09 +0000 (0:00:00.338)       0:01:08.714 ******
2026-02-17 04:23:10.536246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:10.536263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:10.536285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:10.536304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:10.536353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:10.536419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:10.536435 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:23:10.536447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:10.536459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:10.536471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:10.536482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:10.536502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:10.536527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:13.745874 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:23:13.745979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:13.745999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:13.746012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:13.746100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:13.746114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:13.746140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:13.746152 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:23:13.746164 | orchestrator |
2026-02-17 04:23:13.746194 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-17 04:23:13.746207 | orchestrator | Tuesday 17 February 2026 04:23:10 +0000 (0:00:00.761)       0:01:09.476 ******
2026-02-17 04:23:13.746219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:13.746232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:13.746244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-17 04:23:13.746261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:13.746286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:15.435679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-17 04:23:15.435797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.435815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.435851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.435869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.435894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.435962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.435988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.436013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.436033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.436134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.436159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.436180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:23:15.436200 | orchestrator |
2026-02-17 04:23:15.436223 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-17 04:23:15.436254 | orchestrator | Tuesday 17 February 2026 04:23:15 +0000 (0:00:04.478)       0:01:13.955 ******
2026-02-17 04:23:15.436276 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:23:15.436312 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:24:34.407695 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:24:34.407837 | orchestrator |
2026-02-17 04:24:34.407866 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-17 04:24:34.407889 | orchestrator | Tuesday 17 February 2026 04:23:15 +0000 (0:00:00.315)       0:01:14.271 ******
2026-02-17 04:24:34.407909 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-17 04:24:34.407928 | orchestrator |
2026-02-17 04:24:34.407947 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-17 04:24:34.407965 | orchestrator | Tuesday 17 February 2026 04:23:17 +0000 (0:00:02.053)       0:01:16.324 ******
2026-02-17 04:24:34.407983 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-17 04:24:34.408002 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-17 04:24:34.408021 | orchestrator |
2026-02-17 04:24:34.408039 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-17 04:24:34.408057 | orchestrator | Tuesday 17 February 2026 04:23:19 +0000 (0:00:02.236)       0:01:18.561 ******
2026-02-17 04:24:34.408075 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:24:34.408092 | orchestrator |
2026-02-17 04:24:34.408110 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-17 04:24:34.408159 | orchestrator | Tuesday 17 February 2026 04:23:35 +0000 (0:00:15.422)       0:01:33.984 ******
2026-02-17 04:24:34.408178 | orchestrator |
2026-02-17 04:24:34.408197 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-17 04:24:34.408216 | orchestrator | Tuesday 17 February 2026 04:23:35 +0000 (0:00:00.067)       0:01:34.052 ******
2026-02-17 04:24:34.408235 | orchestrator |
2026-02-17 04:24:34.408254 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-17 04:24:34.408272 | orchestrator | Tuesday 17 February 2026 04:23:35 +0000 (0:00:00.068)       0:01:34.120 ******
2026-02-17 04:24:34.408293 | orchestrator |
2026-02-17
04:24:34.408312 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-17 04:24:34.408329 | orchestrator | Tuesday 17 February 2026 04:23:35 +0000 (0:00:00.071) 0:01:34.192 ****** 2026-02-17 04:24:34.408347 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:24:34.408365 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:24:34.408383 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:24:34.408401 | orchestrator | 2026-02-17 04:24:34.408418 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-17 04:24:34.408436 | orchestrator | Tuesday 17 February 2026 04:23:44 +0000 (0:00:08.779) 0:01:42.971 ****** 2026-02-17 04:24:34.408454 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:24:34.408472 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:24:34.408528 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:24:34.408547 | orchestrator | 2026-02-17 04:24:34.408566 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-17 04:24:34.408585 | orchestrator | Tuesday 17 February 2026 04:23:49 +0000 (0:00:05.400) 0:01:48.371 ****** 2026-02-17 04:24:34.408603 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:24:34.408621 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:24:34.408639 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:24:34.408657 | orchestrator | 2026-02-17 04:24:34.408676 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-17 04:24:34.408694 | orchestrator | Tuesday 17 February 2026 04:23:59 +0000 (0:00:10.345) 0:01:58.717 ****** 2026-02-17 04:24:34.408713 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:24:34.408732 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:24:34.408751 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:24:34.408771 | orchestrator | 2026-02-17 04:24:34.408789 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-17 04:24:34.408808 | orchestrator | Tuesday 17 February 2026 04:24:05 +0000 (0:00:05.522) 0:02:04.239 ****** 2026-02-17 04:24:34.408826 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:24:34.408844 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:24:34.408862 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:24:34.408880 | orchestrator | 2026-02-17 04:24:34.408897 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-17 04:24:34.408914 | orchestrator | Tuesday 17 February 2026 04:24:16 +0000 (0:00:10.650) 0:02:14.890 ****** 2026-02-17 04:24:34.408932 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:24:34.408949 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:24:34.408967 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:24:34.408986 | orchestrator | 2026-02-17 04:24:34.409004 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-17 04:24:34.409022 | orchestrator | Tuesday 17 February 2026 04:24:26 +0000 (0:00:10.890) 0:02:25.780 ****** 2026-02-17 04:24:34.409041 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:24:34.409059 | orchestrator | 2026-02-17 04:24:34.409078 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:24:34.409098 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-17 04:24:34.409118 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 04:24:34.409153 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 04:24:34.409171 | orchestrator | 2026-02-17 04:24:34.409191 | orchestrator | 2026-02-17 04:24:34.409209 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-17 04:24:34.409227 | orchestrator | Tuesday 17 February 2026 04:24:34 +0000 (0:00:07.079) 0:02:32.859 ****** 2026-02-17 04:24:34.409246 | orchestrator | =============================================================================== 2026-02-17 04:24:34.409284 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.42s 2026-02-17 04:24:34.409305 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.75s 2026-02-17 04:24:34.409352 | orchestrator | designate : Restart designate-worker container ------------------------- 10.89s 2026-02-17 04:24:34.409374 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.65s 2026-02-17 04:24:34.409391 | orchestrator | designate : Restart designate-central container ------------------------ 10.35s 2026-02-17 04:24:34.409406 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.78s 2026-02-17 04:24:34.409424 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.08s 2026-02-17 04:24:34.409442 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.27s 2026-02-17 04:24:34.409461 | orchestrator | designate : Copying over config.json files for services ----------------- 6.26s 2026-02-17 04:24:34.409507 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.83s 2026-02-17 04:24:34.409525 | orchestrator | designate : Restart designate-producer container ------------------------ 5.52s 2026-02-17 04:24:34.409543 | orchestrator | designate : Restart designate-api container ----------------------------- 5.40s 2026-02-17 04:24:34.409560 | orchestrator | designate : Check designate containers ---------------------------------- 4.48s 2026-02-17 04:24:34.409578 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 3.92s 2026-02-17 04:24:34.409596 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.70s 2026-02-17 04:24:34.409692 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.47s 2026-02-17 04:24:34.409715 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.33s 2026-02-17 04:24:34.409731 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.26s 2026-02-17 04:24:34.409747 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.08s 2026-02-17 04:24:34.409764 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.89s 2026-02-17 04:24:36.729874 | orchestrator | 2026-02-17 04:24:36 | INFO  | Task 9f81a220-e27d-4eb8-bcef-c50befb7bf88 (octavia) was prepared for execution. 2026-02-17 04:24:36.729963 | orchestrator | 2026-02-17 04:24:36 | INFO  | It takes a moment until task 9f81a220-e27d-4eb8-bcef-c50befb7bf88 (octavia) has been started and output is visible here. 
2026-02-17 04:26:40.353597 | orchestrator | 2026-02-17 04:26:40.353780 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:26:40.353799 | orchestrator | 2026-02-17 04:26:40.353811 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:26:40.353823 | orchestrator | Tuesday 17 February 2026 04:24:40 +0000 (0:00:00.284) 0:00:00.284 ****** 2026-02-17 04:26:40.353834 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:26:40.353846 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:26:40.353857 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:26:40.353868 | orchestrator | 2026-02-17 04:26:40.353879 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:26:40.353890 | orchestrator | Tuesday 17 February 2026 04:24:41 +0000 (0:00:00.305) 0:00:00.590 ****** 2026-02-17 04:26:40.353901 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-17 04:26:40.353937 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-17 04:26:40.353952 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-17 04:26:40.353964 | orchestrator | 2026-02-17 04:26:40.353978 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-17 04:26:40.353990 | orchestrator | 2026-02-17 04:26:40.354002 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-17 04:26:40.354073 | orchestrator | Tuesday 17 February 2026 04:24:41 +0000 (0:00:00.442) 0:00:01.032 ****** 2026-02-17 04:26:40.354088 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:26:40.354102 | orchestrator | 2026-02-17 04:26:40.354114 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-17 04:26:40.354126 | orchestrator | Tuesday 17 February 2026 04:24:42 +0000 (0:00:00.540) 0:00:01.573 ****** 2026-02-17 04:26:40.354152 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-17 04:26:40.354165 | orchestrator | 2026-02-17 04:26:40.354178 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-17 04:26:40.354190 | orchestrator | Tuesday 17 February 2026 04:24:45 +0000 (0:00:03.366) 0:00:04.939 ****** 2026-02-17 04:26:40.354202 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-17 04:26:40.354215 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-17 04:26:40.354227 | orchestrator | 2026-02-17 04:26:40.354239 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-17 04:26:40.354251 | orchestrator | Tuesday 17 February 2026 04:24:52 +0000 (0:00:06.448) 0:00:11.387 ****** 2026-02-17 04:26:40.354264 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:26:40.354276 | orchestrator | 2026-02-17 04:26:40.354289 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-17 04:26:40.354330 | orchestrator | Tuesday 17 February 2026 04:24:55 +0000 (0:00:03.117) 0:00:14.505 ****** 2026-02-17 04:26:40.354349 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:26:40.354371 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-17 04:26:40.354391 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-17 04:26:40.354403 | orchestrator | 2026-02-17 04:26:40.354428 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-17 04:26:40.354440 | orchestrator | Tuesday 17 February 2026 04:25:03 +0000 
(0:00:08.250) 0:00:22.755 ****** 2026-02-17 04:26:40.354458 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-17 04:26:40.354480 | orchestrator | 2026-02-17 04:26:40.354506 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-17 04:26:40.354524 | orchestrator | Tuesday 17 February 2026 04:25:06 +0000 (0:00:03.125) 0:00:25.880 ****** 2026-02-17 04:26:40.354541 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-17 04:26:40.354556 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-17 04:26:40.354573 | orchestrator | 2026-02-17 04:26:40.354590 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-17 04:26:40.354608 | orchestrator | Tuesday 17 February 2026 04:25:13 +0000 (0:00:07.132) 0:00:33.012 ****** 2026-02-17 04:26:40.354625 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-17 04:26:40.354644 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-17 04:26:40.354657 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-17 04:26:40.354667 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-17 04:26:40.354678 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-17 04:26:40.354689 | orchestrator | 2026-02-17 04:26:40.354700 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-17 04:26:40.354723 | orchestrator | Tuesday 17 February 2026 04:25:28 +0000 (0:00:15.335) 0:00:48.348 ****** 2026-02-17 04:26:40.354734 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:26:40.354745 | orchestrator | 2026-02-17 04:26:40.354756 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-17 04:26:40.354766 | orchestrator | Tuesday 17 February 2026 04:25:29 +0000 (0:00:00.736) 0:00:49.084 ****** 2026-02-17 04:26:40.354777 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.354788 | orchestrator | 2026-02-17 04:26:40.354799 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-17 04:26:40.354809 | orchestrator | Tuesday 17 February 2026 04:25:34 +0000 (0:00:04.662) 0:00:53.747 ****** 2026-02-17 04:26:40.354820 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.354831 | orchestrator | 2026-02-17 04:26:40.354842 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-17 04:26:40.354873 | orchestrator | Tuesday 17 February 2026 04:25:38 +0000 (0:00:04.114) 0:00:57.861 ****** 2026-02-17 04:26:40.354885 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:26:40.354901 | orchestrator | 2026-02-17 04:26:40.354920 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-17 04:26:40.354937 | orchestrator | Tuesday 17 February 2026 04:25:41 +0000 (0:00:03.030) 0:01:00.891 ****** 2026-02-17 04:26:40.354955 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-17 04:26:40.354975 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-17 04:26:40.354994 | orchestrator | 2026-02-17 04:26:40.355014 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-17 04:26:40.355030 | orchestrator | Tuesday 17 February 2026 04:25:51 +0000 (0:00:09.741) 0:01:10.633 ****** 2026-02-17 04:26:40.355042 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-17 04:26:40.355053 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-17 04:26:40.355065 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-17 04:26:40.355077 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-17 04:26:40.355092 | orchestrator | 2026-02-17 04:26:40.355103 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-17 04:26:40.355118 | orchestrator | Tuesday 17 February 2026 04:26:06 +0000 (0:00:15.662) 0:01:26.295 ****** 2026-02-17 04:26:40.355137 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.355154 | orchestrator | 2026-02-17 04:26:40.355171 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-17 04:26:40.355189 | orchestrator | Tuesday 17 February 2026 04:26:11 +0000 (0:00:04.472) 0:01:30.768 ****** 2026-02-17 04:26:40.355208 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.355226 | orchestrator | 2026-02-17 04:26:40.355246 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-17 04:26:40.355261 | orchestrator | Tuesday 17 February 2026 04:26:16 +0000 (0:00:05.521) 0:01:36.290 ****** 2026-02-17 04:26:40.355277 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:26:40.355346 | orchestrator | 2026-02-17 04:26:40.355368 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-17 04:26:40.355387 | orchestrator | Tuesday 17 February 2026 04:26:17 +0000 (0:00:00.221) 0:01:36.512 ****** 2026-02-17 04:26:40.355406 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:26:40.355424 | orchestrator | 2026-02-17 04:26:40.355442 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-17 04:26:40.355453 | orchestrator | Tuesday 17 February 2026 04:26:21 +0000 (0:00:04.575) 0:01:41.088 ****** 2026-02-17 04:26:40.355475 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:26:40.355486 | orchestrator | 2026-02-17 04:26:40.355506 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-17 04:26:40.355525 | orchestrator | Tuesday 17 February 2026 04:26:22 +0000 (0:00:01.096) 0:01:42.184 ****** 2026-02-17 04:26:40.355540 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.355558 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:26:40.355584 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:26:40.355602 | orchestrator | 2026-02-17 04:26:40.355621 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-17 04:26:40.355639 | orchestrator | Tuesday 17 February 2026 04:26:28 +0000 (0:00:05.277) 0:01:47.462 ****** 2026-02-17 04:26:40.355658 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.355677 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:26:40.355694 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:26:40.355711 | orchestrator | 2026-02-17 04:26:40.355728 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-17 04:26:40.355746 | orchestrator | Tuesday 17 February 2026 04:26:32 +0000 (0:00:04.532) 0:01:51.994 ****** 2026-02-17 04:26:40.355761 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.355777 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:26:40.355795 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:26:40.355812 | orchestrator | 2026-02-17 04:26:40.355829 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-17 
04:26:40.355846 | orchestrator | Tuesday 17 February 2026 04:26:33 +0000 (0:00:01.049) 0:01:53.043 ****** 2026-02-17 04:26:40.355864 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:26:40.355881 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:26:40.355900 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:26:40.355919 | orchestrator | 2026-02-17 04:26:40.355937 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-17 04:26:40.355953 | orchestrator | Tuesday 17 February 2026 04:26:35 +0000 (0:00:01.749) 0:01:54.793 ****** 2026-02-17 04:26:40.355963 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:26:40.355974 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:26:40.355985 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.355996 | orchestrator | 2026-02-17 04:26:40.356006 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-17 04:26:40.356018 | orchestrator | Tuesday 17 February 2026 04:26:36 +0000 (0:00:01.299) 0:01:56.092 ****** 2026-02-17 04:26:40.356028 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.356039 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:26:40.356049 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:26:40.356060 | orchestrator | 2026-02-17 04:26:40.356071 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-17 04:26:40.356082 | orchestrator | Tuesday 17 February 2026 04:26:37 +0000 (0:00:01.292) 0:01:57.385 ****** 2026-02-17 04:26:40.356092 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:26:40.356104 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:26:40.356114 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:26:40.356125 | orchestrator | 2026-02-17 04:26:40.356150 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-17 04:27:04.987670 | orchestrator 
| Tuesday 17 February 2026 04:26:40 +0000 (0:00:02.338) 0:01:59.724 ****** 2026-02-17 04:27:04.987788 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:27:04.987804 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:27:04.987816 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:27:04.987827 | orchestrator | 2026-02-17 04:27:04.987839 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-17 04:27:04.987850 | orchestrator | Tuesday 17 February 2026 04:26:41 +0000 (0:00:01.548) 0:02:01.272 ****** 2026-02-17 04:27:04.987861 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:27:04.987873 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:27:04.987906 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:27:04.987918 | orchestrator | 2026-02-17 04:27:04.987929 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-17 04:27:04.987940 | orchestrator | Tuesday 17 February 2026 04:26:42 +0000 (0:00:00.669) 0:02:01.942 ****** 2026-02-17 04:27:04.987951 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:27:04.987961 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:27:04.987972 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:27:04.987983 | orchestrator | 2026-02-17 04:27:04.987994 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-17 04:27:04.988005 | orchestrator | Tuesday 17 February 2026 04:26:46 +0000 (0:00:03.819) 0:02:05.762 ****** 2026-02-17 04:27:04.988016 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:27:04.988027 | orchestrator | 2026-02-17 04:27:04.988038 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-17 04:27:04.988049 | orchestrator | Tuesday 17 February 2026 04:26:46 +0000 (0:00:00.547) 0:02:06.309 ****** 2026-02-17 
04:27:04.988060 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:27:04.988070 | orchestrator | 2026-02-17 04:27:04.988081 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-17 04:27:04.988092 | orchestrator | Tuesday 17 February 2026 04:26:50 +0000 (0:00:03.082) 0:02:09.392 ****** 2026-02-17 04:27:04.988103 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:27:04.988114 | orchestrator | 2026-02-17 04:27:04.988125 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-17 04:27:04.988135 | orchestrator | Tuesday 17 February 2026 04:26:52 +0000 (0:00:02.847) 0:02:12.240 ****** 2026-02-17 04:27:04.988146 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-17 04:27:04.988159 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-17 04:27:04.988170 | orchestrator | 2026-02-17 04:27:04.988181 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-17 04:27:04.988192 | orchestrator | Tuesday 17 February 2026 04:26:59 +0000 (0:00:06.407) 0:02:18.647 ****** 2026-02-17 04:27:04.988203 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:27:04.988216 | orchestrator | 2026-02-17 04:27:04.988228 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-17 04:27:04.988241 | orchestrator | Tuesday 17 February 2026 04:27:02 +0000 (0:00:03.252) 0:02:21.900 ****** 2026-02-17 04:27:04.988253 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:27:04.988292 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:27:04.988306 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:27:04.988318 | orchestrator | 2026-02-17 04:27:04.988345 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-17 04:27:04.988359 | orchestrator | Tuesday 17 February 2026 04:27:03 +0000 (0:00:00.493) 0:02:22.393 ****** 
2026-02-17 04:27:04.988375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:04.988410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:04.988433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:04.988446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:04.988459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:04.988476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:04.988488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:04.988507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:04.988527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:06.404410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:06.404517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:06.404550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:06.404564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:06.404577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:06.404609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:06.404622 | orchestrator |
2026-02-17 04:27:06.404635 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-02-17 04:27:06.404647 | orchestrator | Tuesday 17 February 2026 04:27:05 +0000 (0:00:02.400) 0:02:24.793 ******
2026-02-17 04:27:06.404658 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:27:06.404670 | orchestrator |
2026-02-17 04:27:06.404681 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-02-17 04:27:06.404692 | orchestrator | Tuesday 17 February 2026 04:27:05 +0000 (0:00:00.137) 0:02:24.931 ******
2026-02-17 04:27:06.404703 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:27:06.404733 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:27:06.404746 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:27:06.404757 | orchestrator |
2026-02-17 04:27:06.404768 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-02-17 04:27:06.404779 | orchestrator | Tuesday 17 February 2026 04:27:05 +0000 (0:00:00.293) 0:02:25.225 ******
2026-02-17 04:27:06.404792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:06.404805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:06.404824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:06.404844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:06.404856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:06.404870 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:27:06.404893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:11.126920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:11.127034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:11.127072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:11.127121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:11.127143 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:27:11.127166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:11.127222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:11.127315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:11.127341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:11.127372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:11.127396 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:27:11.127408 | orchestrator |
2026-02-17 04:27:11.127423 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-17 04:27:11.127437 | orchestrator | Tuesday 17 February 2026 04:27:06 +0000 (0:00:00.664) 0:02:25.890 ******
2026-02-17 04:27:11.127451 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:27:11.127463 | orchestrator |
2026-02-17 04:27:11.127476 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-02-17 04:27:11.127488 | orchestrator | Tuesday 17 February 2026 04:27:07 +0000 (0:00:00.699) 0:02:26.589 ******
2026-02-17 04:27:11.127502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:11.127517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:11.127542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:12.600902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:12.601030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:12.601047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:12.601061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.601074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.601086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.601115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.601142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.601156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.601168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:12.601181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:12.601193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:12.601205 | orchestrator |
2026-02-17 04:27:12.601219 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-02-17 04:27:12.601232 | orchestrator | Tuesday 17 February 2026 04:27:12 +0000 (0:00:04.842) 0:02:31.432 ******
2026-02-17 04:27:12.601298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:12.702569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:12.702679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.702707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.702730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:12.702754 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:27:12.702777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-17 04:27:12.702792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-17 04:27:12.702862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.702889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-17 04:27:12.702909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-17 04:27:12.702928 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:27:12.702949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-17 04:27:12.702967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 04:27:12.702986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 04:27:12.703037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-17 04:27:13.479370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:27:13.479470 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:27:13.479487 | orchestrator | 2026-02-17 04:27:13.479499 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-17 04:27:13.479512 | orchestrator | Tuesday 17 February 2026 04:27:12 +0000 (0:00:00.645) 0:02:32.077 ****** 2026-02-17 04:27:13.479525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-17 04:27:13.479547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 04:27:13.479569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 04:27:13.479628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 04:27:13.479689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:27:13.479712 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:27:13.479725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-17 04:27:13.479737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 04:27:13.479748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 04:27:13.479759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 04:27:13.479779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:27:13.479791 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:27:13.479815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-17 04:27:17.909625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 04:27:17.909774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 04:27:17.909794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 04:27:17.909808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 04:27:17.909846 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:27:17.909861 | orchestrator | 2026-02-17 04:27:17.909873 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-17 
04:27:17.909885 | orchestrator | Tuesday 17 February 2026 04:27:13 +0000 (0:00:01.237) 0:02:33.315 ****** 2026-02-17 04:27:17.909898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:17.909946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:17.909960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:17.909971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-17 04:27:17.909992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-17 04:27:17.910004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-17 04:27:17.910074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:17.910105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:33.388864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:33.388977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:33.388994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:33.389032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:33.389045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:27:33.389072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-17 04:27:33.389102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:27:33.389116 | orchestrator | 2026-02-17 04:27:33.389129 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-17 04:27:33.389142 | orchestrator | Tuesday 17 February 2026 04:27:18 +0000 (0:00:04.922) 0:02:38.237 ****** 2026-02-17 04:27:33.389154 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-17 04:27:33.389166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-17 04:27:33.389177 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-17 04:27:33.389188 | orchestrator | 2026-02-17 04:27:33.389199 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-17 04:27:33.389210 | orchestrator | Tuesday 17 February 2026 04:27:20 +0000 (0:00:01.655) 0:02:39.893 ****** 2026-02-17 04:27:33.389223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:33.389288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:33.389302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:33.389328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-17 04:27:48.246353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-17 04:27:48.246472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-17 04:27:48.246509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:27:48.246646 | orchestrator | 2026-02-17 04:27:48.246658 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-17 04:27:48.246671 | orchestrator | Tuesday 17 February 2026 04:27:36 +0000 (0:00:15.972) 0:02:55.866 ****** 2026-02-17 04:27:48.246681 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:27:48.246692 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:27:48.246702 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:27:48.246712 | orchestrator | 2026-02-17 04:27:48.246722 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-17 04:27:48.246732 | orchestrator | Tuesday 17 February 2026 04:27:38 +0000 (0:00:01.699) 0:02:57.565 ****** 2026-02-17 04:27:48.246742 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-17 04:27:48.246752 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-17 04:27:48.246762 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-17 04:27:48.246772 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-17 04:27:48.246782 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-17 04:27:48.246792 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-17 04:27:48.246802 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-17 04:27:48.246812 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-17 04:27:48.246821 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-17 04:27:48.246835 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-17 04:27:48.246847 | orchestrator 
| changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-17 04:27:48.246858 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-17 04:27:48.246869 | orchestrator | 2026-02-17 04:27:48.246880 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-17 04:27:48.246899 | orchestrator | Tuesday 17 February 2026 04:27:43 +0000 (0:00:04.952) 0:03:02.518 ****** 2026-02-17 04:27:48.246911 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-17 04:27:48.246922 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-17 04:27:48.246940 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-17 04:27:56.471472 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-17 04:27:56.471578 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-17 04:27:56.471592 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-17 04:27:56.471604 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-17 04:27:56.471615 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-17 04:27:56.471626 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-17 04:27:56.471638 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-17 04:27:56.471649 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-17 04:27:56.471660 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-17 04:27:56.471671 | orchestrator | 2026-02-17 04:27:56.471684 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-17 04:27:56.471696 | orchestrator | Tuesday 17 February 2026 04:27:48 +0000 (0:00:05.100) 0:03:07.618 ****** 2026-02-17 04:27:56.471707 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-17 04:27:56.471718 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-17 04:27:56.471729 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-17 04:27:56.471740 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-17 04:27:56.471752 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-17 04:27:56.471763 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-17 04:27:56.471773 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-17 04:27:56.471784 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-17 04:27:56.471795 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-17 04:27:56.471806 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-17 04:27:56.471817 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-17 04:27:56.471828 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-17 04:27:56.471839 | orchestrator | 2026-02-17 04:27:56.471850 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-17 04:27:56.471862 | orchestrator | Tuesday 17 February 2026 04:27:53 +0000 (0:00:05.172) 0:03:12.791 ****** 2026-02-17 04:27:56.471877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:56.471910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:56.471973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 04:27:56.471988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-17 04:27:56.472002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-17 04:27:56.472016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-17 04:27:56.472030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:56.472045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:56.472071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-17 04:27:56.472092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:29:21.063261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:29:21.063381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-17 04:29:21.063400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:21.063414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:21.063450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:21.063464 | orchestrator | 2026-02-17 
04:29:21.063491 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-17 04:29:21.063504 | orchestrator | Tuesday 17 February 2026 04:27:57 +0000 (0:00:03.812) 0:03:16.604 ****** 2026-02-17 04:29:21.063516 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:21.063529 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:29:21.063540 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:29:21.063551 | orchestrator | 2026-02-17 04:29:21.063562 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-17 04:29:21.063573 | orchestrator | Tuesday 17 February 2026 04:27:57 +0000 (0:00:00.522) 0:03:17.127 ****** 2026-02-17 04:29:21.063585 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.063596 | orchestrator | 2026-02-17 04:29:21.063607 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-17 04:29:21.063618 | orchestrator | Tuesday 17 February 2026 04:27:59 +0000 (0:00:02.164) 0:03:19.292 ****** 2026-02-17 04:29:21.063629 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.063640 | orchestrator | 2026-02-17 04:29:21.063650 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-17 04:29:21.063661 | orchestrator | Tuesday 17 February 2026 04:28:01 +0000 (0:00:02.032) 0:03:21.324 ****** 2026-02-17 04:29:21.063673 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.063684 | orchestrator | 2026-02-17 04:29:21.063695 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-17 04:29:21.063707 | orchestrator | Tuesday 17 February 2026 04:28:04 +0000 (0:00:02.172) 0:03:23.497 ****** 2026-02-17 04:29:21.063736 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.063749 | orchestrator | 2026-02-17 04:29:21.063763 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-17 04:29:21.063776 | orchestrator | Tuesday 17 February 2026 04:28:06 +0000 (0:00:02.082) 0:03:25.580 ****** 2026-02-17 04:29:21.063789 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.063801 | orchestrator | 2026-02-17 04:29:21.063814 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-17 04:29:21.063826 | orchestrator | Tuesday 17 February 2026 04:28:28 +0000 (0:00:22.302) 0:03:47.882 ****** 2026-02-17 04:29:21.063839 | orchestrator | 2026-02-17 04:29:21.063852 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-17 04:29:21.063865 | orchestrator | Tuesday 17 February 2026 04:28:28 +0000 (0:00:00.066) 0:03:47.948 ****** 2026-02-17 04:29:21.063877 | orchestrator | 2026-02-17 04:29:21.063890 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-17 04:29:21.063902 | orchestrator | Tuesday 17 February 2026 04:28:28 +0000 (0:00:00.066) 0:03:48.014 ****** 2026-02-17 04:29:21.063914 | orchestrator | 2026-02-17 04:29:21.063927 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-17 04:29:21.063939 | orchestrator | Tuesday 17 February 2026 04:28:28 +0000 (0:00:00.065) 0:03:48.080 ****** 2026-02-17 04:29:21.063960 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.063973 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:29:21.063987 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:29:21.063999 | orchestrator | 2026-02-17 04:29:21.064012 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-17 04:29:21.064024 | orchestrator | Tuesday 17 February 2026 04:28:45 +0000 (0:00:16.615) 0:04:04.695 ****** 2026-02-17 04:29:21.064037 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.064050 | orchestrator | changed: 
[testbed-node-1] 2026-02-17 04:29:21.064062 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:29:21.064075 | orchestrator | 2026-02-17 04:29:21.064088 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-17 04:29:21.064100 | orchestrator | Tuesday 17 February 2026 04:28:56 +0000 (0:00:11.380) 0:04:16.075 ****** 2026-02-17 04:29:21.064111 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.064122 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:29:21.064133 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:29:21.064144 | orchestrator | 2026-02-17 04:29:21.064155 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-17 04:29:21.064188 | orchestrator | Tuesday 17 February 2026 04:29:01 +0000 (0:00:05.258) 0:04:21.334 ****** 2026-02-17 04:29:21.064199 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:29:21.064211 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:29:21.064222 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.064232 | orchestrator | 2026-02-17 04:29:21.064243 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-17 04:29:21.064255 | orchestrator | Tuesday 17 February 2026 04:29:10 +0000 (0:00:08.311) 0:04:29.645 ****** 2026-02-17 04:29:21.064266 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:29:21.064277 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:29:21.064288 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:29:21.064299 | orchestrator | 2026-02-17 04:29:21.064310 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:29:21.064322 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-17 04:29:21.064334 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-17 04:29:21.064346 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 04:29:21.064357 | orchestrator | 2026-02-17 04:29:21.064368 | orchestrator | 2026-02-17 04:29:21.064379 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:29:21.064391 | orchestrator | Tuesday 17 February 2026 04:29:21 +0000 (0:00:10.767) 0:04:40.413 ****** 2026-02-17 04:29:21.064402 | orchestrator | =============================================================================== 2026-02-17 04:29:21.064413 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.30s 2026-02-17 04:29:21.064424 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.62s 2026-02-17 04:29:21.064440 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.97s 2026-02-17 04:29:21.064451 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.66s 2026-02-17 04:29:21.064463 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.34s 2026-02-17 04:29:21.064473 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.38s 2026-02-17 04:29:21.064485 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.77s 2026-02-17 04:29:21.064495 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.74s 2026-02-17 04:29:21.064507 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.31s 2026-02-17 04:29:21.064518 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.25s 2026-02-17 04:29:21.064535 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.13s 2026-02-17 04:29:21.064546 
| orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.45s 2026-02-17 04:29:21.064557 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.41s 2026-02-17 04:29:21.064568 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.52s 2026-02-17 04:29:21.064586 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.28s 2026-02-17 04:29:21.378086 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.26s 2026-02-17 04:29:21.378231 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.17s 2026-02-17 04:29:21.378248 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.10s 2026-02-17 04:29:21.378260 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 4.95s 2026-02-17 04:29:21.378272 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.92s 2026-02-17 04:29:23.699276 | orchestrator | 2026-02-17 04:29:23 | INFO  | Task d0b7cf83-9479-4d62-ada9-6a2ea0384004 (ceilometer) was prepared for execution. 2026-02-17 04:29:23.699393 | orchestrator | 2026-02-17 04:29:23 | INFO  | It takes a moment until task d0b7cf83-9479-4d62-ada9-6a2ea0384004 (ceilometer) has been started and output is visible here. 
2026-02-17 04:29:46.059246 | orchestrator | 2026-02-17 04:29:46.059364 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:29:46.059382 | orchestrator | 2026-02-17 04:29:46.059394 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:29:46.059422 | orchestrator | Tuesday 17 February 2026 04:29:27 +0000 (0:00:00.259) 0:00:00.259 ****** 2026-02-17 04:29:46.059433 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:29:46.059457 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:29:46.059468 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:29:46.059479 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:29:46.059489 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:29:46.059501 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:29:46.059512 | orchestrator | 2026-02-17 04:29:46.059523 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:29:46.059534 | orchestrator | Tuesday 17 February 2026 04:29:28 +0000 (0:00:00.769) 0:00:01.028 ****** 2026-02-17 04:29:46.059545 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-17 04:29:46.059557 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-17 04:29:46.059568 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-17 04:29:46.059578 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-17 04:29:46.059589 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-17 04:29:46.059600 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-17 04:29:46.059611 | orchestrator | 2026-02-17 04:29:46.059622 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-17 04:29:46.059633 | orchestrator | 2026-02-17 04:29:46.059643 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-17 04:29:46.059654 | orchestrator | Tuesday 17 February 2026 04:29:29 +0000 (0:00:00.606) 0:00:01.635 ****** 2026-02-17 04:29:46.059666 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:29:46.059679 | orchestrator | 2026-02-17 04:29:46.059690 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-17 04:29:46.059700 | orchestrator | Tuesday 17 February 2026 04:29:30 +0000 (0:00:01.204) 0:00:02.839 ****** 2026-02-17 04:29:46.059712 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:46.059723 | orchestrator | 2026-02-17 04:29:46.059734 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-17 04:29:46.059773 | orchestrator | Tuesday 17 February 2026 04:29:30 +0000 (0:00:00.123) 0:00:02.963 ****** 2026-02-17 04:29:46.059787 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:46.059799 | orchestrator | 2026-02-17 04:29:46.059812 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-17 04:29:46.059824 | orchestrator | Tuesday 17 February 2026 04:29:30 +0000 (0:00:00.138) 0:00:03.101 ****** 2026-02-17 04:29:46.059837 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:29:46.059849 | orchestrator | 2026-02-17 04:29:46.059861 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-17 04:29:46.059874 | orchestrator | Tuesday 17 February 2026 04:29:34 +0000 (0:00:03.456) 0:00:06.557 ****** 2026-02-17 04:29:46.059886 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:29:46.059898 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-17 04:29:46.059911 | orchestrator | 
2026-02-17 04:29:46.059938 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-17 04:29:46.059951 | orchestrator | Tuesday 17 February 2026 04:29:37 +0000 (0:00:03.347) 0:00:09.905 ****** 2026-02-17 04:29:46.059963 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-17 04:29:46.059975 | orchestrator | 2026-02-17 04:29:46.059988 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-17 04:29:46.060000 | orchestrator | Tuesday 17 February 2026 04:29:40 +0000 (0:00:03.134) 0:00:13.039 ****** 2026-02-17 04:29:46.060013 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-17 04:29:46.060025 | orchestrator | 2026-02-17 04:29:46.060037 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-17 04:29:46.060049 | orchestrator | Tuesday 17 February 2026 04:29:44 +0000 (0:00:03.882) 0:00:16.922 ****** 2026-02-17 04:29:46.060062 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:46.060074 | orchestrator | 2026-02-17 04:29:46.060087 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-17 04:29:46.060099 | orchestrator | Tuesday 17 February 2026 04:29:44 +0000 (0:00:00.135) 0:00:17.058 ****** 2026-02-17 04:29:46.060113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:46.060170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:46.060186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:46.060207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:46.060228 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:46.060241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:29:46.060254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:29:46.060274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:29:50.662386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:29:50.662512 | orchestrator | 2026-02-17 04:29:50.662528 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-17 04:29:50.662540 | orchestrator | Tuesday 17 February 2026 04:29:46 +0000 (0:00:01.427) 0:00:18.485 ****** 2026-02-17 04:29:50.662550 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-02-17 04:29:50.662561 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 04:29:50.662571 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-17 04:29:50.662581 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-17 04:29:50.662591 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-17 04:29:50.662601 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-17 04:29:50.662610 | orchestrator | 2026-02-17 04:29:50.662620 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-17 04:29:50.662632 | orchestrator | Tuesday 17 February 2026 04:29:47 +0000 (0:00:01.557) 0:00:20.043 ****** 2026-02-17 04:29:50.662642 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:29:50.662668 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:29:50.662678 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:29:50.662688 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:29:50.662697 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:29:50.662707 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:29:50.662716 | orchestrator | 2026-02-17 04:29:50.662726 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-17 04:29:50.662736 | orchestrator | Tuesday 17 February 2026 04:29:48 +0000 (0:00:00.618) 0:00:20.661 ****** 2026-02-17 04:29:50.662747 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:50.662757 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:29:50.662766 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:29:50.662776 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:29:50.662786 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:29:50.662795 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:29:50.662804 | orchestrator | 2026-02-17 04:29:50.662814 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-02-17 04:29:50.662825 | orchestrator | Tuesday 17 February 2026 04:29:48 +0000 (0:00:00.762) 0:00:21.424 ****** 2026-02-17 04:29:50.662835 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:29:50.662845 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:29:50.662854 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:29:50.662864 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:29:50.662873 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:29:50.662882 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:29:50.662892 | orchestrator | 2026-02-17 04:29:50.662937 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-17 04:29:50.662951 | orchestrator | Tuesday 17 February 2026 04:29:49 +0000 (0:00:00.636) 0:00:22.060 ****** 2026-02-17 04:29:50.662969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:50.662988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:29:50.663017 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:50.663058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:50.663077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:29:50.663094 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:29:50.663110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:50.663136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:29:50.663188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:50.663207 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:29:50.663222 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:29:50.663240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:50.663269 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:29:50.663298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:55.162094 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:29:55.162266 | orchestrator | 2026-02-17 04:29:55.162297 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-17 04:29:55.162311 | orchestrator | Tuesday 17 February 2026 04:29:50 +0000 (0:00:01.030) 0:00:23.091 ****** 2026-02-17 04:29:55.162326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:55.162343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:29:55.162356 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:55.162384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:55.162397 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:29:55.162434 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:29:55.162447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:55.162458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-02-17 04:29:55.162470 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:29:55.162500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:55.162515 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:29:55.162535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:55.162554 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:29:55.162581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:55.162616 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:29:55.162635 | orchestrator | 2026-02-17 04:29:55.162650 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-17 04:29:55.162665 | orchestrator | Tuesday 17 February 2026 04:29:51 +0000 (0:00:00.807) 0:00:23.899 ****** 2026-02-17 04:29:55.162678 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 04:29:55.162690 | orchestrator | 2026-02-17 04:29:55.162702 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-17 04:29:55.162715 | orchestrator | Tuesday 17 February 2026 04:29:52 +0000 (0:00:00.661) 0:00:24.561 ****** 2026-02-17 04:29:55.162728 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:29:55.162741 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:29:55.162753 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:29:55.162765 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:29:55.162777 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:29:55.162788 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:29:55.162801 | orchestrator | 2026-02-17 04:29:55.162813 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-17 04:29:55.162826 | orchestrator | Tuesday 17 February 2026 04:29:52 +0000 (0:00:00.770) 
0:00:25.331 ****** 2026-02-17 04:29:55.162838 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:29:55.162850 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:29:55.162862 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:29:55.162874 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:29:55.162887 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:29:55.162904 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:29:55.162923 | orchestrator | 2026-02-17 04:29:55.162940 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-17 04:29:55.162958 | orchestrator | Tuesday 17 February 2026 04:29:53 +0000 (0:00:00.919) 0:00:26.250 ****** 2026-02-17 04:29:55.162977 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:55.162995 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:29:55.163013 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:29:55.163031 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:29:55.163050 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:29:55.163069 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:29:55.163088 | orchestrator | 2026-02-17 04:29:55.163108 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-17 04:29:55.163127 | orchestrator | Tuesday 17 February 2026 04:29:54 +0000 (0:00:00.758) 0:00:27.009 ****** 2026-02-17 04:29:55.163171 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:55.163184 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:29:55.163195 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:29:55.163206 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:29:55.163217 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:29:55.163228 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:29:55.163238 | orchestrator | 2026-02-17 04:29:59.960350 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-17 04:29:59.960450 | orchestrator | Tuesday 17 February 2026 04:29:55 +0000 (0:00:00.589) 0:00:27.599 ****** 2026-02-17 04:29:59.960466 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 04:29:59.960480 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-17 04:29:59.960492 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-17 04:29:59.960503 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-17 04:29:59.960514 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-17 04:29:59.960524 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-17 04:29:59.960535 | orchestrator | 2026-02-17 04:29:59.960548 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-17 04:29:59.960586 | orchestrator | Tuesday 17 February 2026 04:29:56 +0000 (0:00:01.448) 0:00:29.048 ****** 2026-02-17 04:29:59.960601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:59.960633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:29:59.960646 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:29:59.960658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:59.960670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:29:59.960681 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:29:59.960693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:29:59.960723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:29:59.960775 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:29:59.960788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:59.960800 | orchestrator | skipping: [testbed-node-3] 
2026-02-17 04:29:59.960816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:59.960828 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:29:59.960839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:29:59.960850 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:29:59.960861 | orchestrator | 2026-02-17 04:29:59.960875 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-02-17 04:29:59.960887 | orchestrator | Tuesday 17 February 2026 04:29:57 +0000 (0:00:00.831) 0:00:29.879 ****** 2026-02-17 04:29:59.960899 | orchestrator | 
skipping: [testbed-node-0] 2026-02-17 04:29:59.960913 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:29:59.960925 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:29:59.960937 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:29:59.960949 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:29:59.960961 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:29:59.960974 | orchestrator | 2026-02-17 04:29:59.960986 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-02-17 04:29:59.960999 | orchestrator | Tuesday 17 February 2026 04:29:58 +0000 (0:00:00.780) 0:00:30.660 ****** 2026-02-17 04:29:59.961012 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 04:29:59.961024 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-17 04:29:59.961035 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-17 04:29:59.961047 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-17 04:29:59.961059 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-17 04:29:59.961071 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-17 04:29:59.961084 | orchestrator | 2026-02-17 04:29:59.961097 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-02-17 04:29:59.961116 | orchestrator | Tuesday 17 February 2026 04:29:59 +0000 (0:00:01.316) 0:00:31.976 ****** 2026-02-17 04:29:59.961166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:05.704983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:05.705112 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:05.705133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:05.705212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:05.705225 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:05.705237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:05.705249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:05.705289 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:30:05.705313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:05.705333 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:05.705367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:05.705380 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:05.705391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:05.705403 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:05.705415 | orchestrator | 2026-02-17 04:30:05.705433 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-02-17 04:30:05.705446 | orchestrator | Tuesday 17 February 2026 04:30:00 +0000 (0:00:01.029) 0:00:33.006 ****** 2026-02-17 04:30:05.705458 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:05.705469 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:05.705480 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:30:05.705491 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:05.705503 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:05.705516 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:05.705529 | orchestrator | 2026-02-17 04:30:05.705542 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-02-17 04:30:05.705555 | orchestrator | Tuesday 17 February 2026 04:30:01 +0000 (0:00:00.802) 0:00:33.809 ****** 2026-02-17 04:30:05.705567 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:05.705580 | orchestrator | 2026-02-17 04:30:05.705594 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-02-17 04:30:05.705607 | orchestrator | Tuesday 17 February 2026 04:30:01 +0000 (0:00:00.144) 0:00:33.953 ****** 2026-02-17 04:30:05.705620 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:05.705632 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:05.705645 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:30:05.705658 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:05.705678 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:05.705690 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:05.705701 | 
orchestrator | 2026-02-17 04:30:05.705712 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-17 04:30:05.705723 | orchestrator | Tuesday 17 February 2026 04:30:02 +0000 (0:00:00.602) 0:00:34.556 ****** 2026-02-17 04:30:05.705736 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:30:05.705748 | orchestrator | 2026-02-17 04:30:05.705760 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-17 04:30:05.705771 | orchestrator | Tuesday 17 February 2026 04:30:03 +0000 (0:00:01.280) 0:00:35.837 ****** 2026-02-17 04:30:05.705782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:05.705803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:06.164928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:06.165055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:06.165085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:06.165132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:06.165189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:06.165202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:06.165231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:06.165244 | orchestrator | 2026-02-17 04:30:06.165257 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-17 04:30:06.165269 | orchestrator | Tuesday 17 February 2026 04:30:05 +0000 (0:00:02.301) 0:00:38.138 ****** 2026-02-17 04:30:06.165281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:06.165300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:06.165322 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:06.165335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:06.165346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:06.165358 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:06.165369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:06.165388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:07.823923 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:30:07.824010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:07.824029 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:07.824074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:07.824087 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:07.824098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-02-17 04:30:07.824110 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:07.824121 | orchestrator | 2026-02-17 04:30:07.824171 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-17 04:30:07.824185 | orchestrator | Tuesday 17 February 2026 04:30:06 +0000 (0:00:00.750) 0:00:38.889 ****** 2026-02-17 04:30:07.824197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:07.824210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:07.824222 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:07.824251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:07.824269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:07.824289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:07.824301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:07.824313 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:07.824324 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:30:07.824335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:07.824347 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:07.824358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:07.824369 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:07.824390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:14.568689 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:14.568810 | orchestrator | 2026-02-17 04:30:14.568827 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-02-17 04:30:14.568841 | orchestrator | Tuesday 17 February 2026 04:30:07 +0000 (0:00:01.366) 0:00:40.255 ****** 2026-02-17 04:30:14.568870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:14.568886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:14.568898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:14.568911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:14.568924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:14.568973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:14.568992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:14.569005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:14.569017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:14.569028 | orchestrator | 2026-02-17 04:30:14.569040 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-02-17 04:30:14.569051 | orchestrator | Tuesday 17 February 2026 04:30:10 +0000 (0:00:02.347) 0:00:42.602 ****** 2026-02-17 
04:30:14.569063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:14.569074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:14.569100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:23.701591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:23.701705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:23.701721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:23.701733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:23.701746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:23.701778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:23.701790 | orchestrator | 2026-02-17 04:30:23.701802 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-17 04:30:23.701814 | orchestrator | Tuesday 17 February 2026 04:30:14 +0000 (0:00:04.398) 0:00:47.000 ****** 2026-02-17 04:30:23.701840 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 04:30:23.701852 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-17 04:30:23.701862 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-17 04:30:23.701872 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-17 04:30:23.701881 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-17 04:30:23.701891 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-17 04:30:23.701901 | orchestrator | 2026-02-17 04:30:23.701917 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-17 04:30:23.701927 | orchestrator | Tuesday 17 February 2026 04:30:15 +0000 (0:00:01.394) 0:00:48.395 ****** 2026-02-17 04:30:23.701937 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:23.701946 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:23.701956 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:30:23.701966 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:23.701975 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:23.701985 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:23.701995 | orchestrator | 2026-02-17 04:30:23.702004 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-17 04:30:23.702064 | orchestrator | Tuesday 17 February 2026 04:30:16 +0000 
(0:00:00.588) 0:00:48.984 ****** 2026-02-17 04:30:23.702077 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:23.702089 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:23.702100 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:23.702111 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:30:23.702161 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:30:23.702174 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:30:23.702185 | orchestrator | 2026-02-17 04:30:23.702196 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-17 04:30:23.702206 | orchestrator | Tuesday 17 February 2026 04:30:18 +0000 (0:00:01.695) 0:00:50.679 ****** 2026-02-17 04:30:23.702216 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:23.702225 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:23.702235 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:23.702245 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:30:23.702254 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:30:23.702264 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:30:23.702274 | orchestrator | 2026-02-17 04:30:23.702283 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-17 04:30:23.702293 | orchestrator | Tuesday 17 February 2026 04:30:19 +0000 (0:00:01.486) 0:00:52.166 ****** 2026-02-17 04:30:23.702303 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 04:30:23.702313 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-17 04:30:23.702322 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-17 04:30:23.702341 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-17 04:30:23.702350 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-17 04:30:23.702360 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-17 04:30:23.702370 | orchestrator | 2026-02-17 04:30:23.702380 | 
orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-17 04:30:23.702390 | orchestrator | Tuesday 17 February 2026 04:30:21 +0000 (0:00:01.473) 0:00:53.640 ****** 2026-02-17 04:30:23.702401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:23.702412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:23.702422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:23.702446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:24.529031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:24.529206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 
'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:30:24.529250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:24.529265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:24.529277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 
'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:30:24.529289 | orchestrator | 2026-02-17 04:30:24.529302 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-17 04:30:24.529315 | orchestrator | Tuesday 17 February 2026 04:30:23 +0000 (0:00:02.490) 0:00:56.130 ****** 2026-02-17 04:30:24.529343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:24.529373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:24.529393 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:24.529411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:24.529430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:24.529449 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:24.529466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:24.529485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:24.529504 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:30:24.529530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:24.529550 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:24.529582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 
'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:27.797620 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:27.797714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:27.797726 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:27.797731 | orchestrator | 2026-02-17 04:30:27.797736 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-17 04:30:27.797742 | orchestrator | Tuesday 17 February 2026 04:30:24 +0000 (0:00:00.830) 0:00:56.961 ****** 2026-02-17 04:30:27.797746 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:27.797751 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:27.797755 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 04:30:27.797759 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:27.797764 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:27.797768 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:27.797773 | orchestrator | 2026-02-17 04:30:27.797777 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-17 04:30:27.797782 | orchestrator | Tuesday 17 February 2026 04:30:25 +0000 (0:00:00.767) 0:00:57.728 ****** 2026-02-17 04:30:27.797787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:27.797794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:27.797799 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:30:27.797816 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-17 04:30:27.797843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:27.797847 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:30:27.797865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 
'timeout': '30'}}})  2026-02-17 04:30:27.797870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 04:30:27.797874 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:30:27.797879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:27.797884 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:30:27.797888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:27.797893 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:30:27.797900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-17 04:30:27.797908 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:30:27.797913 | orchestrator | 2026-02-17 04:30:27.797917 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-17 04:30:27.797922 | orchestrator | Tuesday 17 February 2026 04:30:26 +0000 (0:00:00.823) 0:00:58.552 ****** 2026-02-17 04:30:27.797931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:00.242841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:00.242962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:00.242981 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:00.242996 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:00.243049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:00.243076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 
'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:31:00.243184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:31:00.243210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-17 04:31:00.243233 | orchestrator | 2026-02-17 04:31:00.243254 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-17 04:31:00.243275 | 
orchestrator | Tuesday 17 February 2026 04:30:27 +0000 (0:00:01.677) 0:01:00.229 ****** 2026-02-17 04:31:00.243294 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:31:00.243314 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:31:00.243336 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:31:00.243356 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:31:00.243377 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:31:00.243400 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:31:00.243419 | orchestrator | 2026-02-17 04:31:00.243438 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-17 04:31:00.243451 | orchestrator | Tuesday 17 February 2026 04:30:28 +0000 (0:00:00.597) 0:01:00.827 ****** 2026-02-17 04:31:00.243464 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:31:00.243476 | orchestrator | 2026-02-17 04:31:00.243490 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-17 04:31:00.243516 | orchestrator | Tuesday 17 February 2026 04:30:32 +0000 (0:00:04.231) 0:01:05.058 ****** 2026-02-17 04:31:00.243529 | orchestrator | 2026-02-17 04:31:00.243542 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-17 04:31:00.243554 | orchestrator | Tuesday 17 February 2026 04:30:32 +0000 (0:00:00.070) 0:01:05.129 ****** 2026-02-17 04:31:00.243565 | orchestrator | 2026-02-17 04:31:00.243576 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-17 04:31:00.243587 | orchestrator | Tuesday 17 February 2026 04:30:32 +0000 (0:00:00.091) 0:01:05.220 ****** 2026-02-17 04:31:00.243598 | orchestrator | 2026-02-17 04:31:00.243610 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-17 04:31:00.243621 | orchestrator | Tuesday 17 February 2026 04:30:33 +0000 (0:00:00.245) 
0:01:05.466 ****** 2026-02-17 04:31:00.243631 | orchestrator | 2026-02-17 04:31:00.243642 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-17 04:31:00.243653 | orchestrator | Tuesday 17 February 2026 04:30:33 +0000 (0:00:00.083) 0:01:05.549 ****** 2026-02-17 04:31:00.243664 | orchestrator | 2026-02-17 04:31:00.243675 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-17 04:31:00.243686 | orchestrator | Tuesday 17 February 2026 04:30:33 +0000 (0:00:00.066) 0:01:05.616 ****** 2026-02-17 04:31:00.243697 | orchestrator | 2026-02-17 04:31:00.243708 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-17 04:31:00.243718 | orchestrator | Tuesday 17 February 2026 04:30:33 +0000 (0:00:00.072) 0:01:05.688 ****** 2026-02-17 04:31:00.243729 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:31:00.243748 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:31:00.243760 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:31:00.243771 | orchestrator | 2026-02-17 04:31:00.243782 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-17 04:31:00.243793 | orchestrator | Tuesday 17 February 2026 04:30:43 +0000 (0:00:10.589) 0:01:16.278 ****** 2026-02-17 04:31:00.243804 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:31:00.243815 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:31:00.243825 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:31:00.243836 | orchestrator | 2026-02-17 04:31:00.243847 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-17 04:31:00.243858 | orchestrator | Tuesday 17 February 2026 04:30:53 +0000 (0:00:09.788) 0:01:26.067 ****** 2026-02-17 04:31:00.243869 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:31:00.243880 | orchestrator | changed: 
[testbed-node-3] 2026-02-17 04:31:00.243891 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:31:00.243902 | orchestrator | 2026-02-17 04:31:00.243913 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:31:00.243925 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-17 04:31:00.243938 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-17 04:31:00.243959 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-17 04:31:00.672698 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-17 04:31:00.672803 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-17 04:31:00.672818 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-17 04:31:00.672831 | orchestrator | 2026-02-17 04:31:00.672843 | orchestrator | 2026-02-17 04:31:00.672854 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:31:00.672893 | orchestrator | Tuesday 17 February 2026 04:31:00 +0000 (0:00:06.602) 0:01:32.669 ****** 2026-02-17 04:31:00.672905 | orchestrator | =============================================================================== 2026-02-17 04:31:00.672916 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.59s 2026-02-17 04:31:00.672927 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.79s 2026-02-17 04:31:00.672938 | orchestrator | ceilometer : Restart ceilometer-compute container ----------------------- 6.60s 2026-02-17 04:31:00.672949 | orchestrator | ceilometer : Copying over ceilometer.conf 
------------------------------- 4.40s 2026-02-17 04:31:00.672960 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.23s 2026-02-17 04:31:00.672971 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.88s 2026-02-17 04:31:00.672982 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.46s 2026-02-17 04:31:00.672993 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.35s 2026-02-17 04:31:00.673003 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.13s 2026-02-17 04:31:00.673014 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.49s 2026-02-17 04:31:00.673025 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.35s 2026-02-17 04:31:00.673036 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.30s 2026-02-17 04:31:00.673047 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.70s 2026-02-17 04:31:00.673059 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.68s 2026-02-17 04:31:00.673070 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.56s 2026-02-17 04:31:00.673083 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.49s 2026-02-17 04:31:00.673152 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.47s 2026-02-17 04:31:00.673181 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.45s 2026-02-17 04:31:00.673203 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.43s 2026-02-17 04:31:00.673223 | orchestrator | ceilometer : Check custom event_definitions.yaml 
exists ----------------- 1.39s 2026-02-17 04:31:03.039374 | orchestrator | 2026-02-17 04:31:03 | INFO  | Task 8673617b-3974-41c7-b017-c5a776c4a294 (aodh) was prepared for execution. 2026-02-17 04:31:03.039458 | orchestrator | 2026-02-17 04:31:03 | INFO  | It takes a moment until task 8673617b-3974-41c7-b017-c5a776c4a294 (aodh) has been started and output is visible here. 2026-02-17 04:31:34.040462 | orchestrator | 2026-02-17 04:31:34.040572 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:31:34.040587 | orchestrator | 2026-02-17 04:31:34.040598 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:31:34.040609 | orchestrator | Tuesday 17 February 2026 04:31:07 +0000 (0:00:00.254) 0:00:00.254 ****** 2026-02-17 04:31:34.040619 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:31:34.040646 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:31:34.040656 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:31:34.040666 | orchestrator | 2026-02-17 04:31:34.040676 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:31:34.040686 | orchestrator | Tuesday 17 February 2026 04:31:07 +0000 (0:00:00.329) 0:00:00.583 ****** 2026-02-17 04:31:34.040696 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-17 04:31:34.040707 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-17 04:31:34.040717 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-17 04:31:34.040727 | orchestrator | 2026-02-17 04:31:34.040737 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-17 04:31:34.040746 | orchestrator | 2026-02-17 04:31:34.040777 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-17 04:31:34.040788 | orchestrator | Tuesday 17 February 2026 
04:31:07 +0000 (0:00:00.421) 0:00:01.005 ****** 2026-02-17 04:31:34.040798 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:31:34.040809 | orchestrator | 2026-02-17 04:31:34.040818 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-17 04:31:34.040828 | orchestrator | Tuesday 17 February 2026 04:31:08 +0000 (0:00:00.552) 0:00:01.557 ****** 2026-02-17 04:31:34.040839 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-17 04:31:34.040848 | orchestrator | 2026-02-17 04:31:34.040858 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-17 04:31:34.040868 | orchestrator | Tuesday 17 February 2026 04:31:11 +0000 (0:00:03.297) 0:00:04.854 ****** 2026-02-17 04:31:34.040877 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-17 04:31:34.040888 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-17 04:31:34.040897 | orchestrator | 2026-02-17 04:31:34.040907 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-17 04:31:34.040917 | orchestrator | Tuesday 17 February 2026 04:31:18 +0000 (0:00:06.324) 0:00:11.178 ****** 2026-02-17 04:31:34.040927 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:31:34.040937 | orchestrator | 2026-02-17 04:31:34.040947 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-17 04:31:34.040957 | orchestrator | Tuesday 17 February 2026 04:31:21 +0000 (0:00:03.318) 0:00:14.497 ****** 2026-02-17 04:31:34.040966 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:31:34.040976 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-17 
04:31:34.040986 | orchestrator | 2026-02-17 04:31:34.040996 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-17 04:31:34.041008 | orchestrator | Tuesday 17 February 2026 04:31:25 +0000 (0:00:03.837) 0:00:18.334 ****** 2026-02-17 04:31:34.041019 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-17 04:31:34.041031 | orchestrator | 2026-02-17 04:31:34.041043 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-17 04:31:34.041054 | orchestrator | Tuesday 17 February 2026 04:31:28 +0000 (0:00:03.191) 0:00:21.525 ****** 2026-02-17 04:31:34.041066 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-17 04:31:34.041077 | orchestrator | 2026-02-17 04:31:34.041112 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-17 04:31:34.041124 | orchestrator | Tuesday 17 February 2026 04:31:31 +0000 (0:00:03.609) 0:00:25.135 ****** 2026-02-17 04:31:34.041139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:34.041177 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:34.041200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:34.041212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:34.041226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:34.041238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:34.041250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:34.041269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:35.369229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:35.369338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:35.369356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:35.369368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:35.369380 | orchestrator |
2026-02-17 04:31:35.369394 | orchestrator | TASK [aodh : Check if policies shall be overwritten] ***************************
2026-02-17 04:31:35.369406 | orchestrator | Tuesday 17 February 2026 04:31:34 +0000 (0:00:02.038) 0:00:27.173 ******
2026-02-17 04:31:35.369418 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:31:35.369430 | orchestrator |
2026-02-17 04:31:35.369442 | orchestrator | TASK [aodh : Set aodh policy file] *********************************************
2026-02-17 04:31:35.369453 | orchestrator | Tuesday 17 February 2026 04:31:34 +0000 (0:00:00.144) 0:00:27.318 ******
2026-02-17 04:31:35.369464 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:31:35.369475 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:31:35.369486 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:31:35.369497 | orchestrator |
2026-02-17 04:31:35.369508 | orchestrator | TASK [aodh : Copying over existing policy file] ********************************
2026-02-17 04:31:35.369519 | orchestrator | Tuesday 17 February 2026 04:31:34 +0000 (0:00:00.537) 0:00:27.855 ******
2026-02-17 04:31:35.369531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:35.369605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:35.369628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:35.369648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:35.369669 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:31:35.369688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:35.369710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:35.369744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:35.369772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.233360 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:31:40.233478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:40.233498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:40.233513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.233525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.233556 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:31:40.233569 | orchestrator |
2026-02-17 04:31:40.233581 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-17 04:31:40.233593 | orchestrator | Tuesday 17 February 2026 04:31:35 +0000 (0:00:00.652) 0:00:28.508 ******
2026-02-17 04:31:40.233605 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:31:40.233616 | orchestrator |
2026-02-17 04:31:40.233627 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] ***********
2026-02-17 04:31:40.233638 | orchestrator | Tuesday 17 February 2026 04:31:36 +0000 (0:00:00.745) 0:00:29.253 ******
2026-02-17 04:31:40.233650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:40.233688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:40.233701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:40.233713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:40.233733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:40.233744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:40.233756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.233781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.873894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.873999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.874065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.874137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.874152 | orchestrator |
2026-02-17 04:31:40.874166 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] ***
2026-02-17 04:31:40.874180 | orchestrator | Tuesday 17 February 2026 04:31:40 +0000 (0:00:04.118) 0:00:33.372 ******
2026-02-17 04:31:40.874195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:40.874224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:40.874299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.874316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.874329 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:31:40.874344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:40.874367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:40.874381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.874400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:40.874414 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:31:40.874437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:41.880117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:41.880215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:41.880224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:41.880232 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:31:41.880240 | orchestrator |
2026-02-17 04:31:41.880246 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ********
2026-02-17 04:31:41.880253 | orchestrator | Tuesday 17 February 2026 04:31:40 +0000 (0:00:00.641) 0:00:34.013 ******
2026-02-17 04:31:41.880259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:41.880282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:41.880288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:41.880306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:41.880317 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:31:41.880323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:41.880329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:41.880335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:41.880344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:41.880350 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:31:41.880360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:45.929392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-17 04:31:45.929516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-17 04:31:45.929541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-17 04:31:45.929561 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:31:45.929579 | orchestrator |
2026-02-17 04:31:45.929599 | orchestrator | TASK [aodh : Copying over config.json files for services] **********************
2026-02-17 04:31:45.929617 | orchestrator | Tuesday 17 February 2026 04:31:41 +0000 (0:00:01.005) 0:00:35.018 ******
2026-02-17 04:31:45.929635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-17 04:31:45.929674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'},
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:45.929718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:45.929765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:45.929785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:45.929805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:45.929825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:45.929850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:45.929871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:45.929907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:54.380828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:54.380954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:54.380973 | orchestrator | 2026-02-17 04:31:54.380987 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-17 04:31:54.381000 | orchestrator | Tuesday 17 February 2026 04:31:45 +0000 (0:00:04.048) 0:00:39.067 ****** 2026-02-17 04:31:54.381012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:54.381041 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:54.381129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:54.381162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:54.381175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:54.381187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:54.381198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:54.381215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:54.381236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:54.381248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:54.381268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651237 | orchestrator | 2026-02-17 04:31:59.651255 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-17 04:31:59.651268 | orchestrator | Tuesday 17 February 2026 04:31:54 +0000 (0:00:08.448) 0:00:47.515 ****** 2026-02-17 04:31:59.651280 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:31:59.651293 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:31:59.651304 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:31:59.651314 | orchestrator | 
2026-02-17 04:31:59.651326 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-17 04:31:59.651337 | orchestrator | Tuesday 17 February 2026 04:31:56 +0000 (0:00:01.715) 0:00:49.230 ****** 2026-02-17 04:31:59.651350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:59.651401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:59.651415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-17 04:31:59.651444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 04:31:59.651548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 
04:32:49.351595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-17 04:32:49.351717 | orchestrator | 2026-02-17 04:32:49.351735 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-17 04:32:49.351748 | orchestrator | Tuesday 17 February 2026 04:31:59 +0000 (0:00:03.556) 0:00:52.787 ****** 2026-02-17 04:32:49.351759 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:32:49.351772 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:32:49.351783 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:32:49.351794 | orchestrator | 2026-02-17 04:32:49.351805 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-17 04:32:49.351817 | orchestrator | Tuesday 17 February 2026 04:31:59 +0000 (0:00:00.310) 0:00:53.097 ****** 2026-02-17 04:32:49.351852 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:32:49.351864 | orchestrator | 2026-02-17 04:32:49.351875 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-17 04:32:49.351886 | orchestrator | Tuesday 17 February 2026 04:32:02 +0000 (0:00:02.062) 0:00:55.159 ****** 2026-02-17 04:32:49.351897 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:32:49.351908 | orchestrator | 2026-02-17 04:32:49.351919 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 
2026-02-17 04:32:49.351929 | orchestrator | Tuesday 17 February 2026 04:32:04 +0000 (0:00:02.262) 0:00:57.422 ****** 2026-02-17 04:32:49.351940 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:32:49.351951 | orchestrator | 2026-02-17 04:32:49.351962 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-17 04:32:49.351972 | orchestrator | Tuesday 17 February 2026 04:32:17 +0000 (0:00:13.047) 0:01:10.470 ****** 2026-02-17 04:32:49.351983 | orchestrator | 2026-02-17 04:32:49.351994 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-17 04:32:49.352004 | orchestrator | Tuesday 17 February 2026 04:32:17 +0000 (0:00:00.072) 0:01:10.542 ****** 2026-02-17 04:32:49.352015 | orchestrator | 2026-02-17 04:32:49.352040 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-17 04:32:49.352088 | orchestrator | Tuesday 17 February 2026 04:32:17 +0000 (0:00:00.071) 0:01:10.613 ****** 2026-02-17 04:32:49.352103 | orchestrator | 2026-02-17 04:32:49.352115 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-17 04:32:49.352128 | orchestrator | Tuesday 17 February 2026 04:32:17 +0000 (0:00:00.259) 0:01:10.873 ****** 2026-02-17 04:32:49.352140 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:32:49.352152 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:32:49.352165 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:32:49.352178 | orchestrator | 2026-02-17 04:32:49.352190 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-17 04:32:49.352203 | orchestrator | Tuesday 17 February 2026 04:32:28 +0000 (0:00:10.717) 0:01:21.590 ****** 2026-02-17 04:32:49.352215 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:32:49.352228 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:32:49.352240 | 
orchestrator | changed: [testbed-node-1] 2026-02-17 04:32:49.352251 | orchestrator | 2026-02-17 04:32:49.352261 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-02-17 04:32:49.352272 | orchestrator | Tuesday 17 February 2026 04:32:33 +0000 (0:00:04.977) 0:01:26.568 ****** 2026-02-17 04:32:49.352283 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:32:49.352294 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:32:49.352304 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:32:49.352315 | orchestrator | 2026-02-17 04:32:49.352326 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-02-17 04:32:49.352337 | orchestrator | Tuesday 17 February 2026 04:32:38 +0000 (0:00:05.295) 0:01:31.864 ****** 2026-02-17 04:32:49.352347 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:32:49.352358 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:32:49.352369 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:32:49.352380 | orchestrator | 2026-02-17 04:32:49.352390 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:32:49.352402 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 04:32:49.352415 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 04:32:49.352426 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 04:32:49.352437 | orchestrator | 2026-02-17 04:32:49.352447 | orchestrator | 2026-02-17 04:32:49.352458 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:32:49.352477 | orchestrator | Tuesday 17 February 2026 04:32:48 +0000 (0:00:10.281) 0:01:42.146 ****** 2026-02-17 04:32:49.352488 | orchestrator | 
=============================================================================== 2026-02-17 04:32:49.352498 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.05s 2026-02-17 04:32:49.352509 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.72s 2026-02-17 04:32:49.352538 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.28s 2026-02-17 04:32:49.352549 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.45s 2026-02-17 04:32:49.352560 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.32s 2026-02-17 04:32:49.352571 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 5.30s 2026-02-17 04:32:49.352582 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 4.98s 2026-02-17 04:32:49.352593 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.12s 2026-02-17 04:32:49.352603 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.05s 2026-02-17 04:32:49.352614 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.84s 2026-02-17 04:32:49.352625 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.61s 2026-02-17 04:32:49.352636 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.56s 2026-02-17 04:32:49.352647 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.32s 2026-02-17 04:32:49.352657 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.30s 2026-02-17 04:32:49.352668 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.19s 2026-02-17 04:32:49.352679 | orchestrator | aodh : Creating 
aodh database user and setting permissions -------------- 2.26s 2026-02-17 04:32:49.352689 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.06s 2026-02-17 04:32:49.352700 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.04s 2026-02-17 04:32:49.352711 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.72s 2026-02-17 04:32:49.352722 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.01s 2026-02-17 04:32:51.681136 | orchestrator | 2026-02-17 04:32:51 | INFO  | Task bb577a30-64af-4ce3-8f27-d168c4a0faba (kolla-ceph-rgw) was prepared for execution. 2026-02-17 04:32:51.681259 | orchestrator | 2026-02-17 04:32:51 | INFO  | It takes a moment until task bb577a30-64af-4ce3-8f27-d168c4a0faba (kolla-ceph-rgw) has been started and output is visible here. 2026-02-17 04:33:26.719015 | orchestrator | 2026-02-17 04:33:26.719228 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:33:26.719248 | orchestrator | 2026-02-17 04:33:26.719275 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:33:26.719286 | orchestrator | Tuesday 17 February 2026 04:32:55 +0000 (0:00:00.283) 0:00:00.283 ****** 2026-02-17 04:33:26.719297 | orchestrator | ok: [testbed-manager] 2026-02-17 04:33:26.719308 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:33:26.719318 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:33:26.719327 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:33:26.719337 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:33:26.719346 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:33:26.719356 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:33:26.719366 | orchestrator | 2026-02-17 04:33:26.719376 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-02-17 04:33:26.719386 | orchestrator | Tuesday 17 February 2026 04:32:56 +0000 (0:00:00.839) 0:00:01.122 ****** 2026-02-17 04:33:26.719396 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-17 04:33:26.719406 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-17 04:33:26.719416 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-17 04:33:26.719445 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-17 04:33:26.719455 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-17 04:33:26.719464 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-17 04:33:26.719474 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-17 04:33:26.719483 | orchestrator | 2026-02-17 04:33:26.719493 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-17 04:33:26.719503 | orchestrator | 2026-02-17 04:33:26.719512 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-17 04:33:26.719522 | orchestrator | Tuesday 17 February 2026 04:32:57 +0000 (0:00:00.702) 0:00:01.824 ****** 2026-02-17 04:33:26.720489 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:33:26.720590 | orchestrator | 2026-02-17 04:33:26.720607 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-17 04:33:26.720620 | orchestrator | Tuesday 17 February 2026 04:32:58 +0000 (0:00:01.524) 0:00:03.349 ****** 2026-02-17 04:33:26.720631 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-17 04:33:26.720643 | orchestrator | 2026-02-17 04:33:26.720654 | orchestrator | TASK [service-ks-register : 
ceph-rgw | Creating endpoints] ********************* 2026-02-17 04:33:26.720665 | orchestrator | Tuesday 17 February 2026 04:33:02 +0000 (0:00:03.861) 0:00:07.211 ****** 2026-02-17 04:33:26.720677 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-17 04:33:26.720690 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-17 04:33:26.720701 | orchestrator | 2026-02-17 04:33:26.720712 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-17 04:33:26.720722 | orchestrator | Tuesday 17 February 2026 04:33:08 +0000 (0:00:06.095) 0:00:13.306 ****** 2026-02-17 04:33:26.720734 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-17 04:33:26.720745 | orchestrator | 2026-02-17 04:33:26.720756 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-17 04:33:26.720766 | orchestrator | Tuesday 17 February 2026 04:33:11 +0000 (0:00:03.044) 0:00:16.350 ****** 2026-02-17 04:33:26.720777 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:33:26.720788 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-17 04:33:26.720799 | orchestrator | 2026-02-17 04:33:26.720809 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-17 04:33:26.720820 | orchestrator | Tuesday 17 February 2026 04:33:15 +0000 (0:00:03.636) 0:00:19.987 ****** 2026-02-17 04:33:26.720831 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-17 04:33:26.720903 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-17 04:33:26.720917 | orchestrator | 2026-02-17 04:33:26.720928 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 
2026-02-17 04:33:26.720939 | orchestrator | Tuesday 17 February 2026 04:33:21 +0000 (0:00:06.001) 0:00:25.988 ****** 2026-02-17 04:33:26.720950 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-17 04:33:26.720961 | orchestrator | 2026-02-17 04:33:26.720971 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:33:26.720982 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:26.720994 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:26.721005 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:26.721073 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:26.721087 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:26.721124 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:26.721150 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:26.721161 | orchestrator | 2026-02-17 04:33:26.721173 | orchestrator | 2026-02-17 04:33:26.721184 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:33:26.721195 | orchestrator | Tuesday 17 February 2026 04:33:26 +0000 (0:00:04.755) 0:00:30.744 ****** 2026-02-17 04:33:26.721206 | orchestrator | =============================================================================== 2026-02-17 04:33:26.721216 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.10s 2026-02-17 04:33:26.721227 | orchestrator | service-ks-register : ceph-rgw | Creating roles 
------------------------- 6.00s 2026-02-17 04:33:26.721237 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.76s 2026-02-17 04:33:26.721248 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.86s 2026-02-17 04:33:26.721259 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.64s 2026-02-17 04:33:26.721270 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.04s 2026-02-17 04:33:26.721280 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.52s 2026-02-17 04:33:26.721291 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2026-02-17 04:33:26.721302 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2026-02-17 04:33:29.055853 | orchestrator | 2026-02-17 04:33:29 | INFO  | Task d1b6b806-1724-4fb7-8a9d-cfa91b431a2c (gnocchi) was prepared for execution. 2026-02-17 04:33:29.055951 | orchestrator | 2026-02-17 04:33:29 | INFO  | It takes a moment until task d1b6b806-1724-4fb7-8a9d-cfa91b431a2c (gnocchi) has been started and output is visible here. 
2026-02-17 04:33:34.172770 | orchestrator | 2026-02-17 04:33:34.172905 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:33:34.172930 | orchestrator | 2026-02-17 04:33:34.172948 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:33:34.172966 | orchestrator | Tuesday 17 February 2026 04:33:33 +0000 (0:00:00.258) 0:00:00.258 ****** 2026-02-17 04:33:34.172984 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:33:34.173002 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:33:34.173018 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:33:34.173062 | orchestrator | 2026-02-17 04:33:34.173082 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:33:34.173099 | orchestrator | Tuesday 17 February 2026 04:33:33 +0000 (0:00:00.330) 0:00:00.589 ****** 2026-02-17 04:33:34.173116 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-02-17 04:33:34.173134 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-02-17 04:33:34.173152 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-02-17 04:33:34.173170 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-02-17 04:33:34.173186 | orchestrator | 2026-02-17 04:33:34.173203 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-02-17 04:33:34.173220 | orchestrator | skipping: no hosts matched 2026-02-17 04:33:34.173237 | orchestrator | 2026-02-17 04:33:34.173254 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:33:34.173271 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:34.173322 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-02-17 04:33:34.173341 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:33:34.173357 | orchestrator | 2026-02-17 04:33:34.173374 | orchestrator | 2026-02-17 04:33:34.173391 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:33:34.173410 | orchestrator | Tuesday 17 February 2026 04:33:33 +0000 (0:00:00.367) 0:00:00.956 ****** 2026-02-17 04:33:34.173426 | orchestrator | =============================================================================== 2026-02-17 04:33:34.173443 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s 2026-02-17 04:33:34.173460 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-02-17 04:33:36.441023 | orchestrator | 2026-02-17 04:33:36 | INFO  | Task 01a7e126-e368-4934-882c-b5e7b1aa0c8c (manila) was prepared for execution. 2026-02-17 04:33:36.441158 | orchestrator | 2026-02-17 04:33:36 | INFO  | It takes a moment until task 01a7e126-e368-4934-882c-b5e7b1aa0c8c (manila) has been started and output is visible here. 
2026-02-17 04:34:16.898869 | orchestrator | 2026-02-17 04:34:16.898987 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:34:16.899006 | orchestrator | 2026-02-17 04:34:16.899019 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:34:16.899079 | orchestrator | Tuesday 17 February 2026 04:33:40 +0000 (0:00:00.259) 0:00:00.259 ****** 2026-02-17 04:34:16.899091 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:34:16.899104 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:34:16.899116 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:34:16.899127 | orchestrator | 2026-02-17 04:34:16.899139 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:34:16.899151 | orchestrator | Tuesday 17 February 2026 04:33:40 +0000 (0:00:00.329) 0:00:00.588 ****** 2026-02-17 04:34:16.899162 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-02-17 04:34:16.899174 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-02-17 04:34:16.899202 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-02-17 04:34:16.899214 | orchestrator | 2026-02-17 04:34:16.899225 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-02-17 04:34:16.899236 | orchestrator | 2026-02-17 04:34:16.899247 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-17 04:34:16.899259 | orchestrator | Tuesday 17 February 2026 04:33:41 +0000 (0:00:00.468) 0:00:01.057 ****** 2026-02-17 04:34:16.899270 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:34:16.899282 | orchestrator | 2026-02-17 04:34:16.899293 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-17 
04:34:16.899304 | orchestrator | Tuesday 17 February 2026 04:33:41 +0000 (0:00:00.548) 0:00:01.605 ****** 2026-02-17 04:34:16.899316 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:34:16.899327 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:34:16.899338 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:34:16.899349 | orchestrator | 2026-02-17 04:34:16.899360 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-02-17 04:34:16.899371 | orchestrator | Tuesday 17 February 2026 04:33:42 +0000 (0:00:00.437) 0:00:02.043 ****** 2026-02-17 04:34:16.899382 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-02-17 04:34:16.899394 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-02-17 04:34:16.899408 | orchestrator | 2026-02-17 04:34:16.899420 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-02-17 04:34:16.899454 | orchestrator | Tuesday 17 February 2026 04:33:48 +0000 (0:00:06.364) 0:00:08.407 ****** 2026-02-17 04:34:16.899467 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-02-17 04:34:16.899481 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-02-17 04:34:16.899493 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-02-17 04:34:16.899506 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-02-17 04:34:16.899519 | orchestrator | 2026-02-17 04:34:16.899531 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-02-17 04:34:16.899544 | orchestrator | Tuesday 17 February 2026 04:34:00 +0000 (0:00:12.249) 0:00:20.657 ****** 2026-02-17 04:34:16.899557 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:34:16.899569 | orchestrator | 2026-02-17 04:34:16.899582 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-02-17 04:34:16.899594 | orchestrator | Tuesday 17 February 2026 04:34:04 +0000 (0:00:03.164) 0:00:23.822 ****** 2026-02-17 04:34:16.899605 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:34:16.899616 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-02-17 04:34:16.899627 | orchestrator | 2026-02-17 04:34:16.899638 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-02-17 04:34:16.899649 | orchestrator | Tuesday 17 February 2026 04:34:07 +0000 (0:00:03.685) 0:00:27.508 ****** 2026-02-17 04:34:16.899661 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-17 04:34:16.899672 | orchestrator | 2026-02-17 04:34:16.899683 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-02-17 04:34:16.899694 | orchestrator | Tuesday 17 February 2026 04:34:10 +0000 (0:00:03.143) 0:00:30.652 ****** 2026-02-17 04:34:16.899705 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-02-17 04:34:16.899716 | orchestrator | 2026-02-17 04:34:16.899727 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-02-17 04:34:16.899738 | orchestrator | Tuesday 17 February 2026 04:34:14 +0000 (0:00:03.737) 0:00:34.389 ****** 2026-02-17 04:34:16.899772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:16.899794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:16.899814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:16.899827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:16.899840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:16.899851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:16.899871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:27.634303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:27.634482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:27.634513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:27.634534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:27.634554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:27.634575 | orchestrator | 2026-02-17 04:34:27.634597 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-17 04:34:27.634618 | orchestrator | Tuesday 17 February 2026 04:34:16 +0000 (0:00:02.286) 0:00:36.675 ****** 2026-02-17 04:34:27.634638 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:34:27.634660 | orchestrator | 2026-02-17 04:34:27.634681 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-02-17 04:34:27.634702 | orchestrator | Tuesday 17 February 2026 04:34:17 +0000 (0:00:00.541) 0:00:37.216 ****** 2026-02-17 04:34:27.634723 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:34:27.634747 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:34:27.634768 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:34:27.634787 | orchestrator | 2026-02-17 04:34:27.634807 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-02-17 04:34:27.634827 | orchestrator | Tuesday 17 February 2026 04:34:18 +0000 (0:00:01.042) 0:00:38.259 ****** 2026-02-17 04:34:27.634848 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-17 04:34:27.634908 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-17 04:34:27.634930 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-17 04:34:27.634963 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-17 04:34:27.634983 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-17 04:34:27.635002 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-17 04:34:27.635048 | orchestrator | 2026-02-17 04:34:27.635071 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-02-17 04:34:27.635091 | orchestrator | Tuesday 17 February 2026 04:34:20 +0000 (0:00:01.887) 0:00:40.147 ****** 2026-02-17 04:34:27.635111 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-17 04:34:27.635130 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-17 04:34:27.635149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-17 04:34:27.635169 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 
'protocols': ['NFS', 'CIFS']})  2026-02-17 04:34:27.635188 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-17 04:34:27.635208 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-17 04:34:27.635228 | orchestrator | 2026-02-17 04:34:27.635247 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-02-17 04:34:27.635267 | orchestrator | Tuesday 17 February 2026 04:34:21 +0000 (0:00:01.191) 0:00:41.339 ****** 2026-02-17 04:34:27.635285 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-02-17 04:34:27.635304 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-02-17 04:34:27.635321 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-02-17 04:34:27.635339 | orchestrator | 2026-02-17 04:34:27.635359 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-02-17 04:34:27.635378 | orchestrator | Tuesday 17 February 2026 04:34:22 +0000 (0:00:00.695) 0:00:42.034 ****** 2026-02-17 04:34:27.635398 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:34:27.635419 | orchestrator | 2026-02-17 04:34:27.635438 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-02-17 04:34:27.635457 | orchestrator | Tuesday 17 February 2026 04:34:22 +0000 (0:00:00.148) 0:00:42.183 ****** 2026-02-17 04:34:27.635478 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:34:27.635497 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:34:27.635517 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:34:27.635537 | orchestrator | 2026-02-17 04:34:27.635556 | orchestrator | TASK [manila : include_tasks] 
************************************************** 2026-02-17 04:34:27.635575 | orchestrator | Tuesday 17 February 2026 04:34:23 +0000 (0:00:00.510) 0:00:42.693 ****** 2026-02-17 04:34:27.635609 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:34:27.635627 | orchestrator | 2026-02-17 04:34:27.635646 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-02-17 04:34:27.635664 | orchestrator | Tuesday 17 February 2026 04:34:23 +0000 (0:00:00.567) 0:00:43.261 ****** 2026-02-17 04:34:27.635703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:28.474175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:28.474282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:28.474298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474312 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474429 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:28.474460 | orchestrator | 2026-02-17 04:34:28.474473 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-17 04:34:28.474486 | orchestrator | Tuesday 17 February 2026 04:34:27 +0000 (0:00:04.149) 0:00:47.410 ****** 2026-02-17 04:34:28.474506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 04:34:29.096139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096246 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:34:29.096255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 04:34:29.096283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096324 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:34:29.096330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 04:34:29.096337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 04:34:29.096362 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:34:29.096369 | orchestrator | 2026-02-17 04:34:29.096376 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-17 04:34:29.096384 | orchestrator | Tuesday 17 February 2026 04:34:28 +0000 (0:00:00.852) 0:00:48.263 ****** 2026-02-17 04:34:29.096400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 04:34:33.650956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651175 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:34:33.651191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 04:34:33.651204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651274 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:34:33.651285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-17 04:34:33.651306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 04:34:33.651341 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:34:33.651353 | orchestrator | 2026-02-17 04:34:33.651365 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-17 04:34:33.651377 | orchestrator | Tuesday 17 
February 2026 04:34:29 +0000 (0:00:00.835) 0:00:49.098 ****** 2026-02-17 04:34:33.651402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:40.180334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:40.180502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-17 04:34:40.180535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-17 04:34:40.180554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}})
2026-02-17 04:34:40.180592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:34:40.180633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:40.180647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:40.180669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:40.180681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:40.180692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:40.180704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:40.180716 | orchestrator |
2026-02-17 04:34:40.180729 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-02-17 04:34:40.180747 | orchestrator | Tuesday 17 February 2026 04:34:33 +0000 (0:00:04.530) 0:00:53.629 ******
2026-02-17 04:34:40.180766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:34:44.303134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api',
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:34:44.303261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:34:44.303280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes':
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes':
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged':
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:44.303511 | orchestrator |
2026-02-17 04:34:44.303569 | orchestrator | TASK [manila : Copying over manila-share.conf] *********************************
2026-02-17 04:34:44.303594 | orchestrator | Tuesday 17 February 2026 04:34:40 +0000 (0:00:06.329) 0:00:59.959 ******
2026-02-17 04:34:44.303629 | orchestrator | changed: [testbed-node-0] => (item=manila-share)
2026-02-17 04:34:44.303651 | orchestrator | changed: [testbed-node-1] => (item=manila-share)
2026-02-17 04:34:44.303667 | orchestrator | changed: [testbed-node-2] => (item=manila-share)
2026-02-17 04:34:44.303679 | orchestrator |
2026-02-17 04:34:44.303692 | orchestrator | TASK [manila : Copying over existing policy file] ******************************
2026-02-17 04:34:44.303705 | orchestrator | Tuesday 17 February 2026 04:34:43 +0000 (0:00:03.490) 0:01:03.449 ******
2026-02-17 04:34:44.303731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:34:47.579545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579689 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:34:47.579718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:34:47.579757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130',
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579860 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:34:47.579898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:34:47.579909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler
5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:34:47.579957 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:34:47.579968 | orchestrator |
2026-02-17 04:34:47.579979 | orchestrator | TASK [manila : Check manila containers] ****************************************
2026-02-17 04:34:47.579992 | orchestrator | Tuesday 17 February 2026 04:34:44 +0000 (0:00:00.640) 0:01:04.090 ******
2026-02-17 04:34:47.580061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True,
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:35:28.607543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:35:28.607722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-17 04:35:28.607801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.607824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.607842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes':
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.607884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.607905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.607926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged':
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.607946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.607989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.608043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True,
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-17 04:35:28.608062 | orchestrator |
2026-02-17 04:35:28.608086 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-02-17 04:35:28.608110 | orchestrator | Tuesday 17 February 2026 04:34:47 +0000 (0:00:03.268) 0:01:07.359 ******
2026-02-17 04:35:28.608132 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:35:28.608156 | orchestrator |
2026-02-17 04:35:28.608179 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-02-17 04:35:28.608200 | orchestrator | Tuesday 17 February 2026 04:34:49 +0000 (0:00:02.033) 0:01:09.392 ******
2026-02-17 04:35:28.608222 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:35:28.608244 | orchestrator |
2026-02-17 04:35:28.608263 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-02-17 04:35:28.608284 | orchestrator | Tuesday 17 February 2026 04:34:52 +0000 (0:00:02.373) 0:01:11.766 ******
2026-02-17 04:35:28.608306 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:35:28.608328 | orchestrator |
2026-02-17 04:35:28.608350 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-17 04:35:28.608372 | orchestrator | Tuesday 17 February 2026 04:35:28 +0000 (0:00:36.296) 0:01:48.062 ******
2026-02-17 04:35:28.608393 | orchestrator |
2026-02-17 04:35:28.608428 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-17 04:36:23.093096 | orchestrator | Tuesday 17 February 2026
04:35:28 +0000 (0:00:00.072) 0:01:48.134 ****** 2026-02-17 04:36:23.093206 | orchestrator | 2026-02-17 04:36:23.093223 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-17 04:36:23.093235 | orchestrator | Tuesday 17 February 2026 04:35:28 +0000 (0:00:00.071) 0:01:48.205 ****** 2026-02-17 04:36:23.093246 | orchestrator | 2026-02-17 04:36:23.093258 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-02-17 04:36:23.093269 | orchestrator | Tuesday 17 February 2026 04:35:28 +0000 (0:00:00.072) 0:01:48.278 ****** 2026-02-17 04:36:23.093280 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:36:23.093293 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:36:23.093304 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:36:23.093315 | orchestrator | 2026-02-17 04:36:23.093326 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-02-17 04:36:23.093364 | orchestrator | Tuesday 17 February 2026 04:35:43 +0000 (0:00:15.167) 0:02:03.445 ****** 2026-02-17 04:36:23.093376 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:36:23.093387 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:36:23.093398 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:36:23.093409 | orchestrator | 2026-02-17 04:36:23.093420 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-02-17 04:36:23.093431 | orchestrator | Tuesday 17 February 2026 04:35:54 +0000 (0:00:10.806) 0:02:14.252 ****** 2026-02-17 04:36:23.093442 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:36:23.093453 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:36:23.093463 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:36:23.093474 | orchestrator | 2026-02-17 04:36:23.093485 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 
2026-02-17 04:36:23.093496 | orchestrator | Tuesday 17 February 2026 04:36:04 +0000 (0:00:10.255) 0:02:24.508 ****** 2026-02-17 04:36:23.093506 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:36:23.093517 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:36:23.093543 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:36:23.093554 | orchestrator | 2026-02-17 04:36:23.093575 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:36:23.093587 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 04:36:23.093601 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 04:36:23.093615 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 04:36:23.093627 | orchestrator | 2026-02-17 04:36:23.093639 | orchestrator | 2026-02-17 04:36:23.093652 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:36:23.093665 | orchestrator | Tuesday 17 February 2026 04:36:22 +0000 (0:00:17.853) 0:02:42.361 ****** 2026-02-17 04:36:23.093677 | orchestrator | =============================================================================== 2026-02-17 04:36:23.093690 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 36.30s 2026-02-17 04:36:23.093718 | orchestrator | manila : Restart manila-share container -------------------------------- 17.85s 2026-02-17 04:36:23.093731 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.17s 2026-02-17 04:36:23.093744 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.25s 2026-02-17 04:36:23.093756 | orchestrator | manila : Restart manila-data container --------------------------------- 10.81s 2026-02-17 04:36:23.093768 
| orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.26s 2026-02-17 04:36:23.093781 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.36s 2026-02-17 04:36:23.093793 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.33s 2026-02-17 04:36:23.093806 | orchestrator | manila : Copying over config.json files for services -------------------- 4.53s 2026-02-17 04:36:23.093818 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.15s 2026-02-17 04:36:23.093831 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.74s 2026-02-17 04:36:23.093843 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.69s 2026-02-17 04:36:23.093855 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.49s 2026-02-17 04:36:23.093868 | orchestrator | manila : Check manila containers ---------------------------------------- 3.27s 2026-02-17 04:36:23.093880 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.16s 2026-02-17 04:36:23.093892 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.14s 2026-02-17 04:36:23.093905 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.37s 2026-02-17 04:36:23.093925 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.29s 2026-02-17 04:36:23.093938 | orchestrator | manila : Creating Manila database --------------------------------------- 2.03s 2026-02-17 04:36:23.093952 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.89s 2026-02-17 04:36:23.421570 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-02-17 04:36:35.552819 | orchestrator | 2026-02-17 
04:36:35 | INFO  | Task bd7d72d8-bb6b-4beb-9f92-b1910f70e69e (netdata) was prepared for execution. 2026-02-17 04:36:35.552954 | orchestrator | 2026-02-17 04:36:35 | INFO  | It takes a moment until task bd7d72d8-bb6b-4beb-9f92-b1910f70e69e (netdata) has been started and output is visible here. 2026-02-17 04:38:15.920446 | orchestrator | 2026-02-17 04:38:15.920582 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:38:15.920610 | orchestrator | 2026-02-17 04:38:15.920631 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:38:15.920651 | orchestrator | Tuesday 17 February 2026 04:36:39 +0000 (0:00:00.252) 0:00:00.252 ****** 2026-02-17 04:38:15.920671 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-17 04:38:15.920690 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-17 04:38:15.920706 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-17 04:38:15.920717 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-17 04:38:15.920729 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-17 04:38:15.920740 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-17 04:38:15.920750 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-17 04:38:15.920761 | orchestrator | 2026-02-17 04:38:15.920772 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-17 04:38:15.920783 | orchestrator | 2026-02-17 04:38:15.920794 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-17 04:38:15.920805 | orchestrator | Tuesday 17 February 2026 04:36:40 +0000 (0:00:00.868) 0:00:01.120 ****** 2026-02-17 04:38:15.920818 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:38:15.920831 | orchestrator | 2026-02-17 04:38:15.920843 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-17 04:38:15.920854 | orchestrator | Tuesday 17 February 2026 04:36:42 +0000 (0:00:01.320) 0:00:02.441 ****** 2026-02-17 04:38:15.920865 | orchestrator | ok: [testbed-manager] 2026-02-17 04:38:15.920878 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:38:15.920890 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:38:15.920900 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:38:15.920911 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:38:15.920922 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:38:15.920933 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:38:15.920944 | orchestrator | 2026-02-17 04:38:15.920955 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-17 04:38:15.920966 | orchestrator | Tuesday 17 February 2026 04:36:44 +0000 (0:00:01.937) 0:00:04.378 ****** 2026-02-17 04:38:15.920978 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:38:15.921049 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:38:15.921060 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:38:15.921071 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:38:15.921083 | orchestrator | ok: [testbed-manager] 2026-02-17 04:38:15.921093 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:38:15.921104 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:38:15.921115 | orchestrator | 2026-02-17 04:38:15.921127 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-17 04:38:15.921163 | orchestrator | Tuesday 17 February 2026 04:36:46 +0000 (0:00:02.149) 0:00:06.528 ****** 
2026-02-17 04:38:15.921175 | orchestrator | changed: [testbed-manager] 2026-02-17 04:38:15.921186 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:38:15.921211 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:38:15.921222 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:38:15.921233 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:38:15.921245 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:38:15.921255 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:38:15.921265 | orchestrator | 2026-02-17 04:38:15.921275 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-17 04:38:15.921285 | orchestrator | Tuesday 17 February 2026 04:36:47 +0000 (0:00:01.537) 0:00:08.065 ****** 2026-02-17 04:38:15.921294 | orchestrator | changed: [testbed-manager] 2026-02-17 04:38:15.921304 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:38:15.921314 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:38:15.921323 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:38:15.921333 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:38:15.921343 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:38:15.921352 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:38:15.921362 | orchestrator | 2026-02-17 04:38:15.921371 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-17 04:38:15.921381 | orchestrator | Tuesday 17 February 2026 04:37:08 +0000 (0:00:20.278) 0:00:28.343 ****** 2026-02-17 04:38:15.921391 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:38:15.921400 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:38:15.921410 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:38:15.921420 | orchestrator | changed: [testbed-manager] 2026-02-17 04:38:15.921429 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:38:15.921439 | orchestrator | changed: [testbed-node-2] 2026-02-17 
04:38:15.921449 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:38:15.921458 | orchestrator | 2026-02-17 04:38:15.921468 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-17 04:38:15.921478 | orchestrator | Tuesday 17 February 2026 04:37:49 +0000 (0:00:41.530) 0:01:09.874 ****** 2026-02-17 04:38:15.921488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:38:15.921500 | orchestrator | 2026-02-17 04:38:15.921510 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-17 04:38:15.921520 | orchestrator | Tuesday 17 February 2026 04:37:51 +0000 (0:00:01.581) 0:01:11.456 ****** 2026-02-17 04:38:15.921529 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-17 04:38:15.921539 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-17 04:38:15.921549 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-17 04:38:15.921559 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-17 04:38:15.921588 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-17 04:38:15.921598 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-17 04:38:15.921608 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-17 04:38:15.921618 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-17 04:38:15.921628 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-02-17 04:38:15.921637 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-17 04:38:15.921647 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-02-17 04:38:15.921656 | orchestrator | changed: [testbed-node-2] => 
(item=stream.conf) 2026-02-17 04:38:15.921666 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-17 04:38:15.921675 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-17 04:38:15.921685 | orchestrator | 2026-02-17 04:38:15.921695 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-17 04:38:15.921712 | orchestrator | Tuesday 17 February 2026 04:37:54 +0000 (0:00:03.549) 0:01:15.005 ****** 2026-02-17 04:38:15.921722 | orchestrator | ok: [testbed-manager] 2026-02-17 04:38:15.921732 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:38:15.921742 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:38:15.921751 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:38:15.921761 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:38:15.921770 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:38:15.921779 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:38:15.921789 | orchestrator | 2026-02-17 04:38:15.921799 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-17 04:38:15.921809 | orchestrator | Tuesday 17 February 2026 04:37:55 +0000 (0:00:01.224) 0:01:16.230 ****** 2026-02-17 04:38:15.921818 | orchestrator | changed: [testbed-manager] 2026-02-17 04:38:15.921828 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:38:15.921838 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:38:15.921847 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:38:15.921857 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:38:15.921867 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:38:15.921876 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:38:15.921886 | orchestrator | 2026-02-17 04:38:15.921895 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-02-17 04:38:15.921905 | orchestrator | Tuesday 17 February 2026 04:37:57 +0000 
(0:00:01.323) 0:01:17.554 ****** 2026-02-17 04:38:15.921915 | orchestrator | ok: [testbed-manager] 2026-02-17 04:38:15.921925 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:38:15.921934 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:38:15.921944 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:38:15.921953 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:38:15.921963 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:38:15.921972 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:38:15.922002 | orchestrator | 2026-02-17 04:38:15.922013 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-17 04:38:15.922081 | orchestrator | Tuesday 17 February 2026 04:37:58 +0000 (0:00:01.267) 0:01:18.821 ****** 2026-02-17 04:38:15.922091 | orchestrator | ok: [testbed-manager] 2026-02-17 04:38:15.922101 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:38:15.922110 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:38:15.922119 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:38:15.922129 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:38:15.922138 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:38:15.922148 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:38:15.922157 | orchestrator | 2026-02-17 04:38:15.922173 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-17 04:38:15.922183 | orchestrator | Tuesday 17 February 2026 04:38:01 +0000 (0:00:02.683) 0:01:21.505 ****** 2026-02-17 04:38:15.922193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-17 04:38:15.922205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:38:15.922215 | orchestrator | 2026-02-17 
04:38:15.922224 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-17 04:38:15.922234 | orchestrator | Tuesday 17 February 2026 04:38:02 +0000 (0:00:01.415) 0:01:22.920 ****** 2026-02-17 04:38:15.922243 | orchestrator | changed: [testbed-manager] 2026-02-17 04:38:15.922253 | orchestrator | 2026-02-17 04:38:15.922263 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-17 04:38:15.922272 | orchestrator | Tuesday 17 February 2026 04:38:04 +0000 (0:00:02.037) 0:01:24.958 ****** 2026-02-17 04:38:15.922282 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:38:15.922292 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:38:15.922301 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:38:15.922318 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:38:15.922328 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:38:15.922337 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:38:15.922347 | orchestrator | changed: [testbed-manager] 2026-02-17 04:38:15.922357 | orchestrator | 2026-02-17 04:38:15.922366 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:38:15.922376 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:38:15.922387 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:38:15.922397 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:38:15.922407 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:38:15.922423 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:38:16.334302 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:38:16.334401 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:38:16.334415 | orchestrator | 2026-02-17 04:38:16.334427 | orchestrator | 2026-02-17 04:38:16.334439 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:38:16.334452 | orchestrator | Tuesday 17 February 2026 04:38:15 +0000 (0:00:11.224) 0:01:36.183 ****** 2026-02-17 04:38:16.334463 | orchestrator | =============================================================================== 2026-02-17 04:38:16.334474 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.53s 2026-02-17 04:38:16.334484 | orchestrator | osism.services.netdata : Add repository -------------------------------- 20.28s 2026-02-17 04:38:16.334495 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.22s 2026-02-17 04:38:16.334506 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.55s 2026-02-17 04:38:16.334517 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.68s 2026-02-17 04:38:16.334527 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.15s 2026-02-17 04:38:16.334545 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.04s 2026-02-17 04:38:16.334564 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.94s 2026-02-17 04:38:16.334583 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.58s 2026-02-17 04:38:16.334601 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.54s 2026-02-17 04:38:16.334620 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.42s 2026-02-17 04:38:16.334640 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.32s 2026-02-17 04:38:16.334660 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.32s 2026-02-17 04:38:16.334679 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.27s 2026-02-17 04:38:16.334698 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.22s 2026-02-17 04:38:16.334719 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2026-02-17 04:38:20.038477 | orchestrator | 2026-02-17 04:38:20 | INFO  | Task fcede871-7657-46c8-84dc-f9875552530b (prometheus) was prepared for execution. 2026-02-17 04:38:20.038603 | orchestrator | 2026-02-17 04:38:20 | INFO  | It takes a moment until task fcede871-7657-46c8-84dc-f9875552530b (prometheus) has been started and output is visible here. 2026-02-17 04:38:28.255243 | orchestrator | 2026-02-17 04:38:28.255355 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:38:28.255372 | orchestrator | 2026-02-17 04:38:28.255384 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:38:28.255395 | orchestrator | Tuesday 17 February 2026 04:38:24 +0000 (0:00:00.278) 0:00:00.278 ****** 2026-02-17 04:38:28.255407 | orchestrator | ok: [testbed-manager] 2026-02-17 04:38:28.255419 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:38:28.255430 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:38:28.255442 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:38:28.255453 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:38:28.255464 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:38:28.255475 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:38:28.255495 | orchestrator | 2026-02-17 04:38:28.255514 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:38:28.255533 | orchestrator | Tuesday 17 February 2026 04:38:24 +0000 (0:00:00.762) 0:00:01.041 ****** 2026-02-17 04:38:28.255554 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-17 04:38:28.255572 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-17 04:38:28.255590 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-17 04:38:28.255609 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-17 04:38:28.255628 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-17 04:38:28.255648 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-17 04:38:28.255666 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-17 04:38:28.255684 | orchestrator | 2026-02-17 04:38:28.255703 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-02-17 04:38:28.255724 | orchestrator | 2026-02-17 04:38:28.255746 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-17 04:38:28.255765 | orchestrator | Tuesday 17 February 2026 04:38:25 +0000 (0:00:00.648) 0:00:01.689 ****** 2026-02-17 04:38:28.255786 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:38:28.255809 | orchestrator | 2026-02-17 04:38:28.255830 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-17 04:38:28.255849 | orchestrator | Tuesday 17 February 2026 04:38:26 +0000 (0:00:01.007) 0:00:02.697 ****** 2026-02-17 04:38:28.255874 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-17 04:38:28.255900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:28.255924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:28.255969 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:28.256040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:28.256074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:28.256095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-17 04:38:28.256117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:28.256137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:28.256156 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:28.256181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:28.256210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:29.344209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:29.344323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:29.344354 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-17 04:38:29.344371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:29.344410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:29.344422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:29.344465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:29.344478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:29.344490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:29.344501 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:29.344513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:29.344524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:29.344543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:29.344555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:29.344579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:33.806691 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:33.806838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:33.806866 | orchestrator | 2026-02-17 04:38:33.806889 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-17 04:38:33.806911 | orchestrator | Tuesday 17 February 2026 04:38:29 +0000 (0:00:02.696) 0:00:05.393 ****** 2026-02-17 04:38:33.806931 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 04:38:33.806952 | orchestrator | 2026-02-17 04:38:33.806971 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-17 04:38:33.807020 | orchestrator | Tuesday 17 February 2026 04:38:30 +0000 (0:00:01.370) 0:00:06.764 ****** 2026-02-17 04:38:33.807041 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-17 04:38:33.807092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:33.807113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:33.807150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:33.807195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:33.807218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:33.807240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-02-17 04:38:33.807262 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:38:33.807297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:33.807319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:33.807341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:33.807369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:33.807403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004312 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:36.004366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:36.004376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:36.004387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004456 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-17 04:38:36.004475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:36.004511 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:36.004522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:36.004539 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:37.224618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:37.224724 | orchestrator | 2026-02-17 04:38:37.224736 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-17 04:38:37.224746 | orchestrator | Tuesday 17 February 2026 04:38:35 +0000 (0:00:05.286) 0:00:12.051 ****** 2026-02-17 04:38:37.224755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-17 04:38:37.224764 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 04:38:37.224773 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 04:38:37.224782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 04:38:37.224820 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.224845 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-17 04:38:37.224861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.224870 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.224878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:37.224886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.224894 | orchestrator | skipping: [testbed-manager]
2026-02-17 04:38:37.224907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:37.224915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.224933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.805071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:37.805143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.805150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:37.805155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.805160 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:38:37.805166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.805170 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:38:37.805187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:37.805205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:37.805210 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:38:37.805224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:37.805228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:37.805232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-17 04:38:37.805236 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:38:37.805240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:37.805244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:37.805251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-17 04:38:37.805258 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:38:37.805262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:37.805269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:38.983218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-17 04:38:38.983355 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:38:38.983386 | orchestrator |
2026-02-17 04:38:38.983408 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-02-17 04:38:38.983429 | orchestrator | Tuesday 17 February 2026 04:38:37 +0000 (0:00:01.798) 0:00:13.850 ******
2026-02-17 04:38:38.983449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-17 04:38:38.983471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:38.983493 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:38.983568 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-17 04:38:38.983606 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:38.983619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:38.983631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:38.983643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:38.983655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:38.983667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:38.983686 | orchestrator | skipping: [testbed-manager]
2026-02-17 04:38:38.983705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:38.983716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:38.983736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:40.143681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:40.143798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:40.143817 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:38:40.143832 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:38:40.143844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:40.143856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:40.143909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:40.143922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:40.143934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:40.143945 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:38:40.143975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:40.144048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:40.144062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-17 04:38:40.144074 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:38:40.144086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:40.144106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:40.144124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-17 04:38:40.144136 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:38:40.144147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:40.144167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:43.559742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-17 04:38:43.559851 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:38:43.559868 | orchestrator |
2026-02-17 04:38:43.559881 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-02-17 04:38:43.559895 | orchestrator | Tuesday 17 February 2026 04:38:40 +0000 (0:00:02.338) 0:00:16.188 ******
2026-02-17 04:38:43.559907 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-17 04:38:43.559943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:43.559970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:43.560021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:43.560036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:43.560066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:43.560079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:43.560090 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-17 04:38:43.560110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:43.560122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:43.560138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 04:38:43.560151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:43.560164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:43.560183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-17 04:38:46.305028 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:46.305139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:46.305180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:46.305193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:46.305218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:46.305232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:46.305243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:38:46.305275 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-17 04:38:46.305299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:46.305311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:46.305323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:38:46.305339 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:46.305351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:46.305363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:46.305384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:38:50.300543 | orchestrator | 2026-02-17 04:38:50.300675 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-17 04:38:50.300694 | orchestrator | Tuesday 17 February 2026 04:38:46 +0000 (0:00:06.163) 0:00:22.352 ****** 2026-02-17 04:38:50.300706 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 04:38:50.300719 | orchestrator | 2026-02-17 04:38:50.300731 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-17 04:38:50.300742 | orchestrator | Tuesday 17 February 2026 04:38:47 +0000 (0:00:00.914) 0:00:23.267 ****** 2026-02-17 04:38:50.300755 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095654, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.747833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300771 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095654, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.747833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300796 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095654, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.747833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:38:50.300809 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095654, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.747833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300820 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095654, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.747833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300832 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095654, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.747833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300871 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095682, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7524412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300885 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095682, 'dev': 128, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7524412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300896 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095654, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.747833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300914 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095682, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7524412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300926 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095682, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7524412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300937 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095644, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7461998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300949 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095682, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7524412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:50.300975 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095644, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7461998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.096900 
| orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095672, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7506723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097021 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095682, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7524412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097053 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095644, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7461998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097067 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095644, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7461998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097079 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095644, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7461998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097090 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095672, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7506723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097119 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095672, 'dev': 128, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7506723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097148 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095640, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7450445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097161 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095644, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7461998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097177 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095682, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7524412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:38:52.097189 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095672, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7506723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097201 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095640, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7450445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097219 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095672, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7506723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:52.097230 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095640, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7450445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:52.097250 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095656, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7481084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449205 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095656, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7481084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449272 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095640, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7450445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449280 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095672, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7506723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449285 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095664, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7491345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449300 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095656, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7481084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449305 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095640, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7450445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449312 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095657, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7483323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449331 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095640, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7450445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449340 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095656, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7481084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449351 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095664, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7491345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449357 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095664, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7491345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449366 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095644, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7461998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449371 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095652, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7471344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449376 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095656, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7481084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:53.449384 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095656, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7481084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788628 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095664, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7491345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788744 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095657, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7483323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788763 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095679, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7521682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788795 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095657, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7483323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788807 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095664, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7491345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788818 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095657, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7483323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788830 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095664, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7491345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788858 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095652, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7471344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788876 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095652, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7471344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788896 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095657, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7483323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788907 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095652, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7471344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788919 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095636, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7441344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788930 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095657, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7483323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788942 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095672, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7506723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:54.788961 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095652, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7471344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234478 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095679, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7521682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234592 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095679, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7521682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234608 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095679, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7521682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234620 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095636, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7441344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234631 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095698, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7550108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234643 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095679, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7521682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234654 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095636, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7441344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234688 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095652, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7471344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234708 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095636, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7441344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234719 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095698, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7550108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234731 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095676, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.751252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234743 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095698, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7550108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234754 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095676, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.751252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234766 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095636, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7441344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:56.234794 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095642, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7452862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459598 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095679, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7521682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459696 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095698, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7550108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459712 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095642, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7452862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459725 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095676, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.751252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459737 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095640, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7450445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459749 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095637, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7447872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459793 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095637, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7447872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459824 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095698, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7550108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459837 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095636, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7441344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459848 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095676, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.751252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459859 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095642, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7452862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459870 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095662, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.749124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459881 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095662, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.749124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459905 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095698, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7550108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:57.459923 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095660, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7487683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:58.623850 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095656, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7481084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:58.623939 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095676, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.751252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:58.623955 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095660, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7487683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:58.623966 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095637, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7447872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:58.623978 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095693, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7546108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:58.624032 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:38:58.624060 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095642, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7452862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-17 04:38:58.624088 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095676, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.751252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:58.624101 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095662, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.749124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:58.624112 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095693, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7546108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:58.624123 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:38:58.624135 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095642, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7452862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:58.624146 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095642, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7452862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:58.624165 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095660, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7487683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:58.624185 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095637, 'dev': 128, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7447872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:38:58.624216 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095662, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.749124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017516 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095693, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7546108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017655 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:39:04.017686 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095637, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7447872, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017707 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095637, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7447872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017728 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095660, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7487683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017778 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095664, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7491345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-02-17 04:39:04.017817 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095662, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.749124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017863 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095662, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.749124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017885 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095693, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7546108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017904 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:39:04.017924 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095660, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7487683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017943 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095660, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7487683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.017977 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095693, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7546108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.018092 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:39:04.018116 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095693, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7546108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-17 04:39:04.018137 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:39:04.018165 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095657, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7483323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:04.018200 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095652, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7471344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744535 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095679, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7521682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744658 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095636, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7441344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744686 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095698, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7550108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744737 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095676, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1771296003.751252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744754 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095642, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7452862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744780 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095637, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7447872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744792 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095662, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.749124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744822 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095660, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7487683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744835 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095693, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7546108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-17 04:39:13.744857 | orchestrator | 2026-02-17 04:39:13.744870 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-17 04:39:13.744882 | orchestrator | Tuesday 17 February 2026 04:39:11 +0000 (0:00:24.075) 0:00:47.343 ****** 2026-02-17 04:39:13.744894 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 04:39:13.744906 | orchestrator | 2026-02-17 04:39:13.744918 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-17 04:39:13.744929 | orchestrator | Tuesday 17 February 2026 04:39:12 +0000 (0:00:00.729) 0:00:48.072 ****** 2026-02-17 04:39:13.744940 | orchestrator | [WARNING]: 
Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-02-17 04:39:13.745110 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-02-17 04:39:13.745199 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-02-17 04:39:13.745290 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-02-17 04:39:13.745372 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-02-17 04:39:13.745443 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-02-17 04:39:13.745498 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-02-17 04:39:13.745551 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-17 04:39:13.745562 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 04:39:13.745573 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-17 04:39:13.745584 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-17 04:39:13.745604 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-17 04:39:13.745616 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-17 04:39:13.745627 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-17 04:39:13.745638 | orchestrator |
2026-02-17 04:39:13.745664 | orchestrator | TASK
[prometheus : Copying over prometheus config file] ************************
2026-02-17 04:39:43.766504 | orchestrator | Tuesday 17 February 2026 04:39:13 +0000 (0:00:01.720) 0:00:49.793 ******
2026-02-17 04:39:43.766611 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-17 04:39:43.766630 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:39:43.766644 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-17 04:39:43.766683 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:39:43.766695 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-17 04:39:43.766706 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:39:43.766717 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-17 04:39:43.766729 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:39:43.766740 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-17 04:39:43.766751 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:39:43.766762 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-17 04:39:43.766773 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:39:43.766784 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-17 04:39:43.766796 | orchestrator |
2026-02-17 04:39:43.766808 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-17 04:39:43.766819 | orchestrator | Tuesday 17 February 2026 04:39:30 +0000 (0:00:16.345) 0:01:06.138 ******
2026-02-17 04:39:43.766830 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-17 04:39:43.766841 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-17 04:39:43.766852 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:39:43.766863 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:39:43.766874 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-17 04:39:43.766885 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:39:43.766896 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-17 04:39:43.766906 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:39:43.766917 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-17 04:39:43.766928 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:39:43.766939 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-17 04:39:43.766950 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:39:43.766961 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-17 04:39:43.766973 | orchestrator |
2026-02-17 04:39:43.766992 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-17 04:39:43.767035 | orchestrator | Tuesday 17 February 2026 04:39:32 +0000 (0:00:02.730) 0:01:08.869 ******
2026-02-17 04:39:43.767057 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-17 04:39:43.767075 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:39:43.767088 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-17 04:39:43.767127 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:39:43.767141 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-17 04:39:43.767153 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:39:43.767166 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-17 04:39:43.767192 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:39:43.767205 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-17 04:39:43.767218 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-17 04:39:43.767230 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:39:43.767242 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-17 04:39:43.767255 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:39:43.767267 | orchestrator |
2026-02-17 04:39:43.767280 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-17 04:39:43.767292 | orchestrator | Tuesday 17 February 2026 04:39:34 +0000 (0:00:01.768) 0:01:10.637 ******
2026-02-17 04:39:43.767305 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-17 04:39:43.767317 | orchestrator |
2026-02-17 04:39:43.767329 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-17 04:39:43.767343 | orchestrator | Tuesday 17 February 2026 04:39:35 +0000 (0:00:00.692) 0:01:11.329 ******
2026-02-17 04:39:43.767354 | orchestrator | skipping: [testbed-manager]
2026-02-17 04:39:43.767367 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:39:43.767380 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:39:43.767392 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:39:43.767423 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:39:43.767435 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:39:43.767446 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:39:43.767457 | orchestrator |
2026-02-17 04:39:43.767468 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-17 04:39:43.767479 | orchestrator | Tuesday 17 February 2026 04:39:36 +0000 (0:00:00.764) 0:01:12.094 ******
2026-02-17 04:39:43.767489 | orchestrator | skipping: [testbed-manager]
2026-02-17 04:39:43.767501 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:39:43.767511 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:39:43.767522 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:39:43.767533 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:39:43.767544 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:39:43.767555 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:39:43.767565 | orchestrator |
2026-02-17 04:39:43.767577 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-17 04:39:43.767588 | orchestrator | Tuesday 17 February 2026 04:39:38 +0000 (0:00:02.176) 0:01:14.271 ******
2026-02-17 04:39:43.767599 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-17 04:39:43.767610 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-17 04:39:43.767621 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-17 04:39:43.767632 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-17 04:39:43.767643 | orchestrator | skipping: [testbed-manager]
2026-02-17 04:39:43.767654 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:39:43.767664 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:39:43.767675 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:39:43.767686 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-17 04:39:43.767704 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:39:43.767715 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-17 04:39:43.767773 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:39:43.767788 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-17 04:39:43.767799 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:39:43.767810 | orchestrator |
2026-02-17 04:39:43.767821 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-17 04:39:43.767832 | orchestrator | Tuesday 17 February 2026 04:39:39 +0000 (0:00:01.480) 0:01:15.752 ******
2026-02-17 04:39:43.767843 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-17 04:39:43.767854 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:39:43.767865 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-17 04:39:43.767876 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:39:43.767887 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-17 04:39:43.767898 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:39:43.767909 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-17 04:39:43.767920 |
orchestrator | skipping: [testbed-node-2] 2026-02-17 04:39:43.767931 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-17 04:39:43.767941 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:39:43.767953 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-17 04:39:43.767963 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:39:43.767974 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-17 04:39:43.767985 | orchestrator | 2026-02-17 04:39:43.767996 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-17 04:39:43.768039 | orchestrator | Tuesday 17 February 2026 04:39:41 +0000 (0:00:01.430) 0:01:17.182 ****** 2026-02-17 04:39:43.768055 | orchestrator | [WARNING]: Skipped 2026-02-17 04:39:43.768068 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-17 04:39:43.768079 | orchestrator | due to this access issue: 2026-02-17 04:39:43.768090 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-17 04:39:43.768101 | orchestrator | not a directory 2026-02-17 04:39:43.768112 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 04:39:43.768123 | orchestrator | 2026-02-17 04:39:43.768134 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-17 04:39:43.768145 | orchestrator | Tuesday 17 February 2026 04:39:42 +0000 (0:00:01.107) 0:01:18.289 ****** 2026-02-17 04:39:43.768156 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:39:43.768167 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:39:43.768178 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:39:43.768189 | orchestrator | 
skipping: [testbed-node-2] 2026-02-17 04:39:43.768200 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:39:43.768210 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:39:43.768221 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:39:43.768232 | orchestrator | 2026-02-17 04:39:43.768243 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-17 04:39:43.768254 | orchestrator | Tuesday 17 February 2026 04:39:43 +0000 (0:00:00.896) 0:01:19.185 ****** 2026-02-17 04:39:43.768265 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:39:43.768276 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:39:43.768286 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:39:43.768313 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:39:46.301346 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:39:46.301459 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:39:46.301474 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:39:46.301487 | orchestrator | 2026-02-17 04:39:46.301500 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-17 04:39:46.301513 | orchestrator | Tuesday 17 February 2026 04:39:43 +0000 (0:00:00.870) 0:01:20.056 ****** 2026-02-17 04:39:46.301527 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-17 04:39:46.301545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:39:46.301558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:39:46.301570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:39:46.301598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:39:46.301611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:39:46.301689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:39:46.301704 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-17 04:39:46.301716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:46.301728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:46.301741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:39:46.301753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:46.301771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:39:46.301784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:39:46.301813 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:39:50.004992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:50.005203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:50.005231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:39:50.005250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:50.005270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:39:50.005312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-17 04:39:50.005364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-17 04:39:50.005400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:39:50.005414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:39:50.005425 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:50.005437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-17 04:39:50.005454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:50.005475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:50.005487 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 04:39:50.005501 | orchestrator | 2026-02-17 04:39:50.005515 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-17 04:39:50.005529 | orchestrator | Tuesday 17 February 2026 04:39:48 +0000 (0:00:04.043) 0:01:24.100 ****** 2026-02-17 04:39:50.005541 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-17 04:39:50.005555 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:39:50.005569 | orchestrator | 2026-02-17 04:39:50.005588 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-17 04:41:26.921857 | orchestrator | Tuesday 17 February 2026 04:39:49 +0000 (0:00:01.256) 0:01:25.357 ****** 2026-02-17 04:41:26.921977 | orchestrator | 2026-02-17 04:41:26.921996 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-17 04:41:26.922009 | orchestrator | Tuesday 17 February 2026 04:39:49 +0000 (0:00:00.262) 0:01:25.619 ****** 2026-02-17 04:41:26.922112 | orchestrator | 2026-02-17 04:41:26.922122 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-17 04:41:26.922129 | orchestrator | Tuesday 17 February 2026 04:39:49 +0000 (0:00:00.070) 0:01:25.689 ****** 2026-02-17 04:41:26.922137 | orchestrator | 2026-02-17 04:41:26.922144 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2026-02-17 04:41:26.922152 | orchestrator | Tuesday 17 February 2026 04:39:49 +0000 (0:00:00.069) 0:01:25.758 ****** 2026-02-17 04:41:26.922159 | orchestrator | 2026-02-17 04:41:26.922166 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-17 04:41:26.922174 | orchestrator | Tuesday 17 February 2026 04:39:49 +0000 (0:00:00.068) 0:01:25.826 ****** 2026-02-17 04:41:26.922181 | orchestrator | 2026-02-17 04:41:26.922188 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-17 04:41:26.922195 | orchestrator | Tuesday 17 February 2026 04:39:49 +0000 (0:00:00.066) 0:01:25.893 ****** 2026-02-17 04:41:26.922203 | orchestrator | 2026-02-17 04:41:26.922210 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-17 04:41:26.922217 | orchestrator | Tuesday 17 February 2026 04:39:49 +0000 (0:00:00.064) 0:01:25.958 ****** 2026-02-17 04:41:26.922225 | orchestrator | 2026-02-17 04:41:26.922232 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-17 04:41:26.922239 | orchestrator | Tuesday 17 February 2026 04:39:49 +0000 (0:00:00.094) 0:01:26.053 ****** 2026-02-17 04:41:26.922246 | orchestrator | changed: [testbed-manager] 2026-02-17 04:41:26.922255 | orchestrator | 2026-02-17 04:41:26.922262 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-17 04:41:26.922270 | orchestrator | Tuesday 17 February 2026 04:40:12 +0000 (0:00:22.363) 0:01:48.416 ****** 2026-02-17 04:41:26.922277 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:41:26.922284 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:41:26.922292 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:41:26.922300 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:41:26.922339 | orchestrator | changed: [testbed-manager] 2026-02-17 04:41:26.922355 | 
orchestrator | changed: [testbed-node-5] 2026-02-17 04:41:26.922368 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:41:26.922381 | orchestrator | 2026-02-17 04:41:26.922393 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-17 04:41:26.922407 | orchestrator | Tuesday 17 February 2026 04:40:26 +0000 (0:00:13.685) 0:02:02.101 ****** 2026-02-17 04:41:26.922420 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:41:26.922434 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:41:26.922444 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:41:26.922452 | orchestrator | 2026-02-17 04:41:26.922461 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-17 04:41:26.922470 | orchestrator | Tuesday 17 February 2026 04:40:36 +0000 (0:00:10.606) 0:02:12.708 ****** 2026-02-17 04:41:26.922478 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:41:26.922486 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:41:26.922494 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:41:26.922502 | orchestrator | 2026-02-17 04:41:26.922510 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-17 04:41:26.922519 | orchestrator | Tuesday 17 February 2026 04:40:42 +0000 (0:00:05.664) 0:02:18.373 ****** 2026-02-17 04:41:26.922527 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:41:26.922535 | orchestrator | changed: [testbed-manager] 2026-02-17 04:41:26.922542 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:41:26.922551 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:41:26.922559 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:41:26.922567 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:41:26.922587 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:41:26.922595 | orchestrator | 2026-02-17 04:41:26.922604 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-alertmanager container] *******
2026-02-17 04:41:26.922612 | orchestrator | Tuesday 17 February 2026 04:40:56 +0000 (0:00:14.484) 0:02:32.857 ******
2026-02-17 04:41:26.922620 | orchestrator | changed: [testbed-manager]
2026-02-17 04:41:26.922628 | orchestrator |
2026-02-17 04:41:26.922636 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-17 04:41:26.922645 | orchestrator | Tuesday 17 February 2026 04:41:05 +0000 (0:00:08.346) 0:02:41.203 ******
2026-02-17 04:41:26.922653 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:41:26.922661 | orchestrator | changed: [testbed-node-2]
2026-02-17 04:41:26.922670 | orchestrator | changed: [testbed-node-1]
2026-02-17 04:41:26.922678 | orchestrator |
2026-02-17 04:41:26.922686 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-17 04:41:26.922694 | orchestrator | Tuesday 17 February 2026 04:41:15 +0000 (0:00:10.488) 0:02:51.692 ******
2026-02-17 04:41:26.922703 | orchestrator | changed: [testbed-manager]
2026-02-17 04:41:26.922711 | orchestrator |
2026-02-17 04:41:26.922719 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-17 04:41:26.922727 | orchestrator | Tuesday 17 February 2026 04:41:21 +0000 (0:00:05.558) 0:02:57.250 ******
2026-02-17 04:41:26.922736 | orchestrator | changed: [testbed-node-3]
2026-02-17 04:41:26.922744 | orchestrator | changed: [testbed-node-4]
2026-02-17 04:41:26.922752 | orchestrator | changed: [testbed-node-5]
2026-02-17 04:41:26.922761 | orchestrator |
2026-02-17 04:41:26.922769 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 04:41:26.922778 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-17 04:41:26.922787 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-17 04:41:26.922809 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-17 04:41:26.922817 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-17 04:41:26.922833 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-17 04:41:26.922840 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-17 04:41:26.922847 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-17 04:41:26.922855 | orchestrator |
2026-02-17 04:41:26.922862 | orchestrator |
2026-02-17 04:41:26.922869 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 04:41:26.922877 | orchestrator | Tuesday 17 February 2026 04:41:26 +0000 (0:00:05.244) 0:03:02.494 ******
2026-02-17 04:41:26.922889 | orchestrator | ===============================================================================
2026-02-17 04:41:26.922901 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.08s
2026-02-17 04:41:26.922913 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.36s
2026-02-17 04:41:26.922925 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.35s
2026-02-17 04:41:26.922937 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.48s
2026-02-17 04:41:26.922949 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.69s
2026-02-17 04:41:26.922962 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.61s
2026-02-17 04:41:26.922975 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.49s
2026-02-17 04:41:26.922987 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.35s
2026-02-17 04:41:26.922999 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.16s
2026-02-17 04:41:26.923007 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.66s
2026-02-17 04:41:26.923014 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.56s
2026-02-17 04:41:26.923021 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.29s
2026-02-17 04:41:26.923053 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.24s
2026-02-17 04:41:26.923062 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.04s
2026-02-17 04:41:26.923069 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.73s
2026-02-17 04:41:26.923076 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.70s
2026-02-17 04:41:26.923083 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.34s
2026-02-17 04:41:26.923091 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.18s
2026-02-17 04:41:26.923098 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.80s
2026-02-17 04:41:26.923105 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.77s
2026-02-17 04:41:29.275379 | orchestrator | 2026-02-17 04:41:29 | INFO  | Task bce57d82-ef6e-493b-baef-88010a398aeb (grafana) was prepared for execution.
2026-02-17 04:41:29.277022 | orchestrator | 2026-02-17 04:41:29 | INFO  | It takes a moment until task bce57d82-ef6e-493b-baef-88010a398aeb (grafana) has been started and output is visible here.
2026-02-17 04:41:38.892165 | orchestrator |
2026-02-17 04:41:38.892279 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 04:41:38.892296 | orchestrator |
2026-02-17 04:41:38.892308 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 04:41:38.892319 | orchestrator | Tuesday 17 February 2026 04:41:33 +0000 (0:00:00.260) 0:00:00.260 ******
2026-02-17 04:41:38.892329 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:41:38.892362 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:41:38.892373 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:41:38.892383 | orchestrator |
2026-02-17 04:41:38.892393 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 04:41:38.892403 | orchestrator | Tuesday 17 February 2026 04:41:33 +0000 (0:00:00.317) 0:00:00.578 ******
2026-02-17 04:41:38.892412 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-17 04:41:38.892423 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-17 04:41:38.892432 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-17 04:41:38.892442 | orchestrator |
2026-02-17 04:41:38.892451 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-17 04:41:38.892461 | orchestrator |
2026-02-17 04:41:38.892471 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-17 04:41:38.892480 | orchestrator | Tuesday 17 February 2026 04:41:34 +0000 (0:00:00.476) 0:00:01.055 ******
2026-02-17 04:41:38.892490 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2
2026-02-17 04:41:38.892501 | orchestrator |
2026-02-17 04:41:38.892511 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-17 04:41:38.892520 | orchestrator | Tuesday 17 February 2026 04:41:34 +0000 (0:00:00.555) 0:00:01.611 ******
2026-02-17 04:41:38.892533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:38.892547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:38.892558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:38.892568 | orchestrator |
2026-02-17 04:41:38.892578 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-02-17 04:41:38.892588 | orchestrator | Tuesday 17 February 2026 04:41:35 +0000 (0:00:00.848) 0:00:02.459 ******
2026-02-17 04:41:38.892597 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-02-17 04:41:38.892608 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-02-17 04:41:38.892618 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 04:41:38.892635 | orchestrator |
2026-02-17 04:41:38.892647 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-17 04:41:38.892658 | orchestrator | Tuesday 17 February 2026 04:41:36 +0000 (0:00:00.806) 0:00:03.266 ******
2026-02-17 04:41:38.892682 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:41:38.892694 | orchestrator |
2026-02-17 04:41:38.892705 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-02-17 04:41:38.892716 | orchestrator | Tuesday 17 February 2026 04:41:37 +0000 (0:00:00.551) 0:00:03.818 ******
2026-02-17 04:41:38.892744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:38.892757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:38.892769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:38.892780 | orchestrator |
2026-02-17 04:41:38.892791 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-02-17 04:41:38.892802 | orchestrator | Tuesday 17 February 2026 04:41:38 +0000 (0:00:01.286) 0:00:05.104 ******
2026-02-17 04:41:38.892813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:38.892825 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:41:38.892837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:38.892855 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:41:38.892880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.659969 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:41:45.660107 | orchestrator |
2026-02-17 04:41:45.660120 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-02-17 04:41:45.660128 | orchestrator | Tuesday 17 February 2026 04:41:38 +0000 (0:00:00.560) 0:00:05.664 ******
2026-02-17 04:41:45.660137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660146 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:41:45.660153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes':
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660159 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:41:45.660167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660174 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:41:45.660181 | orchestrator |
2026-02-17 04:41:45.660188 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-02-17 04:41:45.660213 | orchestrator | Tuesday 17 February 2026 04:41:39 +0000 (0:00:00.594) 0:00:06.259 ******
2026-02-17 04:41:45.660220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660270 | orchestrator |
2026-02-17 04:41:45.660276 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-02-17 04:41:45.660282 | orchestrator | Tuesday 17 February 2026 04:41:40 +0000 (0:00:01.274) 0:00:07.533 ******
2026-02-17 04:41:45.660289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-17 04:41:45.660315 | orchestrator |
2026-02-17 04:41:45.660322 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-17 04:41:45.660328 | orchestrator | Tuesday 17 February 2026 04:41:42 +0000 (0:00:01.573) 0:00:09.107 ******
2026-02-17 04:41:45.660334 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:41:45.660340 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:41:45.660346 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:41:45.660353 | orchestrator |
2026-02-17 04:41:45.660359 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-17 04:41:45.660366 | orchestrator | Tuesday 17 February 2026 04:41:42 +0000 (0:00:00.372) 0:00:09.480 ******
2026-02-17 04:41:45.660372 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-17 04:41:45.660381 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-17 04:41:45.660387 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-17 04:41:45.660394 | orchestrator |
2026-02-17 04:41:45.660401 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-17 04:41:45.660412 | orchestrator | Tuesday 17 February 2026 04:41:43 +0000 (0:00:01.205) 0:00:10.685 ******
2026-02-17 04:41:45.660419 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-17 04:41:45.660425 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-17 04:41:45.660432 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-17 04:41:45.660438 | orchestrator |
2026-02-17 04:41:45.660445 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-17 04:41:45.660458 | orchestrator | Tuesday 17 February 2026 04:41:45 +0000 (0:00:01.743) 0:00:12.428 ******
2026-02-17 04:41:52.104185 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 04:41:52.104298 | orchestrator |
2026-02-17 04:41:52.104316 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-17 04:41:52.104329 | orchestrator | Tuesday 17 February 2026 04:41:46 +0000 (0:00:00.757) 0:00:13.186 ******
2026-02-17 04:41:52.104341 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-17 04:41:52.104357 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-17 04:41:52.104378 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:41:52.104390 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:41:52.104401 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:41:52.104412 | orchestrator |
2026-02-17 04:41:52.104424 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-17 04:41:52.104435 | orchestrator | Tuesday 17 February 2026 04:41:47 +0000 (0:00:00.690) 0:00:13.876 ******
2026-02-17 04:41:52.104447 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:41:52.104458 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:41:52.104469 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:41:52.104480 | orchestrator |
2026-02-17 04:41:52.104491 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-17 04:41:52.104502 | orchestrator | Tuesday 17 February 2026 04:41:47 +0000 (0:00:00.369) 0:00:14.246 ******
2026-02-17 04:41:52.104517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095429, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6891334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095429, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6891334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095429, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6891334, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095495, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7061336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095495, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7061336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095495, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7061336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095441, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6931334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095441, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6931334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095441, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6931334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095497, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7091336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095497, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7091336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:52.104726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095497, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7091336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.036703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095466, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6991346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.037745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095466, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6991346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.037794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095466, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6991346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.037817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1095486, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7042863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.037856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1095486, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7042863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.037879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1095486, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7042863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.037927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095311, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6481657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.037961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095311, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6481657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-17 04:41:56.037984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0,
'size': 84, 'inode': 1095311, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6481657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:56.038003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095433, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6901333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:56.038106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095433, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6901333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:56.038139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 34113, 'inode': 1095433, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6901333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:56.038175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095444, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6931334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095444, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6931334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095444, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6931334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095474, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7011335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095474, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7011335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095474, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7011335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1095491, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7051337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1095491, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7051337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1095491, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7051337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095437, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6924853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095437, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6924853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095437, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6924853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1095482, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.703435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:41:59.805683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1095482, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.703435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1095482, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.703435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095469, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7001336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095469, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7001336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095469, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7001336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095461, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.698897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095461, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.698897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727357 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095461, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.698897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095451, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6976068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095451, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6976068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727402 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095451, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6976068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095476, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7027206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:03.727448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095476, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7027206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-17 04:42:03.727466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095476, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7027206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:07.710593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095446, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6951334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:07.710702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095446, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6951334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-02-17 04:42:07.710721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095446, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.6951334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:07.710735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095488, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7042863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:07.710765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095488, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7042863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:07.710799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095488, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7042863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:07.710831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095627, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7434452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:42:07.710843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095627, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7434452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
[Note: every item below is a regular file at /operations/grafana/dashboards/<key> with mode '0644', uid=0, gid=0 (root:root), nlink=1, dev=128, atime=mtime=1764530892.0; isreg=True, rusr/wusr/rgrp/roth=True, and all other type and permission flags (isdir, ischr, isblk, isfifo, islnk, issock, xusr, wgrp, xgrp, woth, xoth, isuid, isgid) False. Entries are condensed to the fields that vary.]
2026-02-17 04:42:07.710855 | orchestrator | changed: [testbed-node-1] => (item=openstack/openstack.json, size=57270, inode=1095627, ctime=1771296003.7434452)
2026-02-17 04:42:07.710867 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json, size=410814, inode=1095536, ctime=1771296003.7216814)
2026-02-17 04:42:07.710886 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/haproxy.json, size=410814, inode=1095536, ctime=1771296003.7216814)
2026-02-17 04:42:07.710918 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json, size=410814, inode=1095536, ctime=1771296003.7216814)
2026-02-17 04:42:07.710948 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json, size=30898, inode=1095516, ctime=1771296003.7133572)
2026-02-17 04:42:11.752584 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json, size=30898, inode=1095516, ctime=1771296003.7133572)
2026-02-17 04:42:11.752718 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json, size=30898, inode=1095516, ctime=1771296003.7133572)
2026-02-17 04:42:11.752737 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json, size=15725, inode=1095558, ctime=1771296003.7239373)
2026-02-17 04:42:11.752768 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-rsrc-use.json, size=15725, inode=1095558, ctime=1771296003.7239373)
2026-02-17 04:42:11.752804 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json, size=15725, inode=1095558, ctime=1771296003.7239373)
2026-02-17 04:42:11.752818 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/alertmanager-overview.json, size=9645, inode=1095508, ctime=1771296003.7107763)
2026-02-17 04:42:11.752855 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/alertmanager-overview.json, size=9645, inode=1095508, ctime=1771296003.7107763)
2026-02-17 04:42:11.752885 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/alertmanager-overview.json, size=9645, inode=1095508, ctime=1771296003.7107763)
2026-02-17 04:42:11.752905 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/opensearch.json, size=65458, inode=1095596, ctime=1771296003.7341342)
2026-02-17 04:42:11.752944 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/opensearch.json, size=65458, inode=1095596, ctime=1771296003.7341342)
2026-02-17 04:42:11.752962 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/opensearch.json, size=65458, inode=1095596, ctime=1771296003.7341342)
2026-02-17 04:42:11.752982 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_full.json, size=682774, inode=1095560, ctime=1771296003.7319496)
2026-02-17 04:42:11.753015 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_full.json, size=682774, inode=1095560, ctime=1771296003.7319496)
2026-02-17 04:42:15.722901 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_full.json, size=682774, inode=1095560, ctime=1771296003.7319496)
2026-02-17 04:42:15.723008 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus-remote-write.json, size=22317, inode=1095600, ctime=1771296003.735758)
2026-02-17 04:42:15.723087 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus-remote-write.json, size=22317, inode=1095600, ctime=1771296003.735758)
2026-02-17 04:42:15.723103 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus-remote-write.json, size=22317, inode=1095600, ctime=1771296003.735758)
2026-02-17 04:42:15.723115 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json, size=38087, inode=1095625, ctime=1771296003.7411342)
2026-02-17 04:42:15.723128 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json, size=38087, inode=1095625, ctime=1771296003.7411342)
2026-02-17 04:42:15.723157 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json, size=38087, inode=1095625, ctime=1771296003.7411342)
2026-02-17 04:42:15.723170 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json, size=21109, inode=1095593, ctime=1771296003.733134)
2026-02-17 04:42:15.723190 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json, size=21109, inode=1095593, ctime=1771296003.733134)
2026-02-17 04:42:15.723207 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json, size=21109, inode=1095593, ctime=1771296003.733134)
2026-02-17 04:42:15.723219 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json, size=24243, inode=1095553, ctime=1771296003.7231255)
2026-02-17 04:42:15.723231 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/memcached.json, size=24243, inode=1095553, ctime=1771296003.7231255)
2026-02-17 04:42:15.723251 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json, size=24243, inode=1095553, ctime=1771296003.7231255)
2026-02-17 04:42:19.323829 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/fluentd.json, size=82960, inode=1095526, ctime=1771296003.7173018)
2026-02-17 04:42:19.323964 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/fluentd.json, size=82960, inode=1095526, ctime=1771296003.7173018)
2026-02-17 04:42:19.323993 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/fluentd.json, size=82960, inode=1095526, ctime=1771296003.7173018)
2026-02-17 04:42:19.324005 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/libvirt.json, size=29672, inode=1095548, ctime=1771296003.7216814)
2026-02-17 04:42:19.324015 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/libvirt.json, size=29672, inode=1095548, ctime=1771296003.7216814)
2026-02-17 04:42:19.324025 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/libvirt.json, size=29672, inode=1095548, ctime=1771296003.7216814)
2026-02-17 04:42:19.324109 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/elasticsearch.json, size=187864, inode=1095518, ctime=1771296003.7159715)
2026-02-17 04:42:19.324133 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/elasticsearch.json, size=187864, inode=1095518, ctime=1771296003.7159715)
2026-02-17 04:42:19.324149 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json, size=16098, inode=1095556, ctime=1771296003.723134)
2026-02-17 04:42:19.324159 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/elasticsearch.json, size=187864, inode=1095518, ctime=1771296003.7159715)
2026-02-17 04:42:19.324174 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json, size=16098, inode=1095556, ctime=1771296003.723134)
2026-02-17 04:42:19.324192 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-cluster-rsrc-use.json, size=16098, inode=1095556, ctime=1771296003.723134)
2026-02-17 04:42:19.324220 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/rabbitmq.json, size=222049, inode=1095619, ctime=1771296003.7411342)
2026-02-17 04:42:23.841581 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/rabbitmq.json, size=222049, inode=1095619, ctime=1771296003.7411342)
2026-02-17 04:42:23.841710 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus_alertmanager.json, size=115472, inode=1095610, ctime=1771296003.7381742)
2026-02-17 04:42:23.841729 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/rabbitmq.json, size=222049, inode=1095619, ctime=1771296003.7411342)
2026-02-17 04:42:23.841741 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus_alertmanager.json, size=115472, inode=1095610, ctime=1771296003.7381742)
2026-02-17 04:42:23.841753 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/blackbox.json, size=31128, inode=1095510, ctime=1771296003.7111337)
2026-02-17 04:42:23.841765 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/blackbox.json, size=31128, inode=1095510, ctime=1771296003.7111337)
2026-02-17 04:42:23.841813 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus_alertmanager.json, size=115472, inode=1095610, ctime=1771296003.7381742)
2026-02-17 04:42:23.841827 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/cadvisor.json, size=53882, inode=1095513, ctime=1771296003.7133572)
2026-02-17 04:42:23.841844 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/cadvisor.json, size=53882, inode=1095513, ctime=1771296003.7133572)
2026-02-17 04:42:23.841856 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/blackbox.json, size=31128, inode=1095510, ctime=1771296003.7111337)
2026-02-17 04:42:23.841868 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_side_by_side.json, size=70691, inode=1095589, ctime=1771296003.7330663)
2026-02-17 04:42:23.841880 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_side_by_side.json, size=70691, inode=1095589, ctime=1771296003.7330663)
2026-02-17 04:42:23.841908 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/cadvisor.json, size=53882, inode=1095513, ctime=1771296003.7330663)
2026-02-17 04:44:06.009638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk':
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095607, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.736295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:44:06.009808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095607, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.736295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:44:06.009840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1095589, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.7330663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:44:06.009862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095607, 'dev': 128, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771296003.736295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-17 04:44:06.009881 | orchestrator | 2026-02-17 04:44:06.009903 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-17 04:44:06.009917 | orchestrator | Tuesday 17 February 2026 04:42:25 +0000 (0:00:37.728) 0:00:51.974 ****** 2026-02-17 04:44:06.009929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-17 04:44:06.009985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-17 04:44:06.009998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-17 04:44:06.010010 | orchestrator | 2026-02-17 04:44:06.010146 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-17 04:44:06.010169 | orchestrator | Tuesday 17 February 2026 04:42:26 +0000 (0:00:01.148) 0:00:53.123 ****** 2026-02-17 04:44:06.010187 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:44:06.010207 | orchestrator | 2026-02-17 04:44:06.010226 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-17 04:44:06.010246 | orchestrator | Tuesday 17 February 2026 04:42:28 +0000 (0:00:02.295) 0:00:55.419 ****** 2026-02-17 04:44:06.010264 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:44:06.010283 | orchestrator | 2026-02-17 04:44:06.010300 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-17 04:44:06.010321 | orchestrator | Tuesday 17 February 2026 04:42:30 +0000 (0:00:02.345) 0:00:57.764 ****** 
2026-02-17 04:44:06.010340 | orchestrator | 2026-02-17 04:44:06.010359 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-17 04:44:06.010375 | orchestrator | Tuesday 17 February 2026 04:42:31 +0000 (0:00:00.084) 0:00:57.849 ****** 2026-02-17 04:44:06.010388 | orchestrator | 2026-02-17 04:44:06.010402 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-17 04:44:06.010414 | orchestrator | Tuesday 17 February 2026 04:42:31 +0000 (0:00:00.075) 0:00:57.924 ****** 2026-02-17 04:44:06.010428 | orchestrator | 2026-02-17 04:44:06.010440 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-17 04:44:06.010451 | orchestrator | Tuesday 17 February 2026 04:42:31 +0000 (0:00:00.081) 0:00:58.006 ****** 2026-02-17 04:44:06.010462 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:44:06.010473 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:44:06.010484 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:44:06.010495 | orchestrator | 2026-02-17 04:44:06.010506 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-17 04:44:06.010517 | orchestrator | Tuesday 17 February 2026 04:42:33 +0000 (0:00:02.170) 0:01:00.177 ****** 2026-02-17 04:44:06.010540 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:44:06.010552 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:44:06.010563 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-17 04:44:06.010575 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-17 04:44:06.010586 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-02-17 04:44:06.010597 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-02-17 04:44:06.010609 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:44:06.010621 | orchestrator | 2026-02-17 04:44:06.010632 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-17 04:44:06.010643 | orchestrator | Tuesday 17 February 2026 04:43:23 +0000 (0:00:50.515) 0:01:50.692 ****** 2026-02-17 04:44:06.010654 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:44:06.010665 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:44:06.010676 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:44:06.010686 | orchestrator | 2026-02-17 04:44:06.010697 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-17 04:44:06.010708 | orchestrator | Tuesday 17 February 2026 04:44:00 +0000 (0:00:36.990) 0:02:27.683 ****** 2026-02-17 04:44:06.010719 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:44:06.010730 | orchestrator | 2026-02-17 04:44:06.010741 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-17 04:44:06.010752 | orchestrator | Tuesday 17 February 2026 04:44:03 +0000 (0:00:02.222) 0:02:29.905 ****** 2026-02-17 04:44:06.010763 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:44:06.010774 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:44:06.010785 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:44:06.010796 | orchestrator | 2026-02-17 04:44:06.010807 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-17 04:44:06.010818 | orchestrator | Tuesday 17 February 2026 04:44:03 +0000 (0:00:00.320) 0:02:30.226 ****** 2026-02-17 04:44:06.010831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-02-17 04:44:06.010856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-17 04:44:06.634697 | orchestrator | 2026-02-17 04:44:06.635693 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-17 04:44:06.635751 | orchestrator | Tuesday 17 February 2026 04:44:05 +0000 (0:00:02.549) 0:02:32.776 ****** 2026-02-17 04:44:06.635764 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:44:06.635777 | orchestrator | 2026-02-17 04:44:06.635789 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:44:06.635801 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-17 04:44:06.635814 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-17 04:44:06.635844 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-17 04:44:06.635856 | orchestrator | 2026-02-17 04:44:06.635867 | orchestrator | 2026-02-17 04:44:06.635878 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:44:06.635911 | orchestrator | Tuesday 17 February 2026 04:44:06 +0000 (0:00:00.286) 0:02:33.062 ****** 2026-02-17 04:44:06.635922 | orchestrator | =============================================================================== 2026-02-17 04:44:06.635933 | orchestrator | grafana : 
Waiting for grafana to start on first node ------------------- 50.52s 2026-02-17 04:44:06.635944 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.73s 2026-02-17 04:44:06.635956 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.99s 2026-02-17 04:44:06.635966 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.55s 2026-02-17 04:44:06.635977 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.35s 2026-02-17 04:44:06.635988 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.30s 2026-02-17 04:44:06.635999 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.22s 2026-02-17 04:44:06.636009 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.17s 2026-02-17 04:44:06.636020 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.74s 2026-02-17 04:44:06.636031 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.57s 2026-02-17 04:44:06.636041 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s 2026-02-17 04:44:06.636073 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.27s 2026-02-17 04:44:06.636085 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.21s 2026-02-17 04:44:06.636096 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.15s 2026-02-17 04:44:06.636107 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.85s 2026-02-17 04:44:06.636118 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.81s 2026-02-17 04:44:06.636128 | orchestrator | grafana : Find custom 
grafana dashboards -------------------------------- 0.76s 2026-02-17 04:44:06.636139 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s 2026-02-17 04:44:06.636150 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.59s 2026-02-17 04:44:06.636161 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.56s 2026-02-17 04:44:06.944530 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-17 04:44:06.954125 | orchestrator | + set -e 2026-02-17 04:44:06.954195 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 04:44:06.954210 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 04:44:06.954223 | orchestrator | ++ INTERACTIVE=false 2026-02-17 04:44:06.954235 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 04:44:06.954246 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 04:44:06.954257 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 04:44:06.954279 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 04:44:06.954412 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 04:44:06.954428 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 04:44:06.954439 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 04:44:06.954451 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 04:44:06.954464 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 04:44:06.954476 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 04:44:06.954488 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 04:44:06.954500 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 04:44:06.954512 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 04:44:06.954527 | orchestrator | ++ export ARA=false 2026-02-17 04:44:06.954539 | orchestrator | ++ ARA=false 2026-02-17 04:44:06.954551 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 04:44:06.954562 | orchestrator | 
++ DEPLOY_MODE=manager 2026-02-17 04:44:06.954573 | orchestrator | ++ export TEMPEST=false 2026-02-17 04:44:06.954585 | orchestrator | ++ TEMPEST=false 2026-02-17 04:44:06.954596 | orchestrator | ++ export IS_ZUUL=true 2026-02-17 04:44:06.954607 | orchestrator | ++ IS_ZUUL=true 2026-02-17 04:44:06.954618 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 04:44:06.954629 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 04:44:06.954673 | orchestrator | ++ export EXTERNAL_API=false 2026-02-17 04:44:06.954685 | orchestrator | ++ EXTERNAL_API=false 2026-02-17 04:44:06.954696 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-17 04:44:06.954707 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-17 04:44:06.954719 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-17 04:44:06.954730 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-17 04:44:06.954741 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-17 04:44:06.954753 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-17 04:44:06.956232 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-17 04:44:07.016392 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-17 04:44:07.016496 | orchestrator | + osism apply clusterapi 2026-02-17 04:44:09.031974 | orchestrator | 2026-02-17 04:44:09 | INFO  | Task 189ec42f-a5e0-46e7-9ec0-463c5bb84611 (clusterapi) was prepared for execution. 2026-02-17 04:44:09.032166 | orchestrator | 2026-02-17 04:44:09 | INFO  | It takes a moment until task 189ec42f-a5e0-46e7-9ec0-463c5bb84611 (clusterapi) has been started and output is visible here. 
2026-02-17 04:45:03.767332 | orchestrator | 2026-02-17 04:45:03.767462 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-17 04:45:03.767479 | orchestrator | 2026-02-17 04:45:03.767492 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-17 04:45:03.767503 | orchestrator | Tuesday 17 February 2026 04:44:13 +0000 (0:00:00.191) 0:00:00.191 ****** 2026-02-17 04:45:03.767515 | orchestrator | included: cert_manager for testbed-manager 2026-02-17 04:45:03.767527 | orchestrator | 2026-02-17 04:45:03.767538 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-17 04:45:03.767549 | orchestrator | Tuesday 17 February 2026 04:44:13 +0000 (0:00:00.286) 0:00:00.478 ****** 2026-02-17 04:45:03.767560 | orchestrator | changed: [testbed-manager] 2026-02-17 04:45:03.767573 | orchestrator | 2026-02-17 04:45:03.767584 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-17 04:45:03.767595 | orchestrator | Tuesday 17 February 2026 04:44:19 +0000 (0:00:05.348) 0:00:05.826 ****** 2026-02-17 04:45:03.767606 | orchestrator | changed: [testbed-manager] 2026-02-17 04:45:03.767617 | orchestrator | 2026-02-17 04:45:03.767646 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-17 04:45:03.767657 | orchestrator | 2026-02-17 04:45:03.767668 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-17 04:45:03.767679 | orchestrator | Tuesday 17 February 2026 04:44:42 +0000 (0:00:23.849) 0:00:29.676 ****** 2026-02-17 04:45:03.767690 | orchestrator | ok: [testbed-manager] 2026-02-17 04:45:03.767702 | orchestrator | 2026-02-17 04:45:03.767713 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-17 04:45:03.767724 | orchestrator | Tuesday 
17 February 2026 04:44:44 +0000 (0:00:01.177) 0:00:30.853 ****** 2026-02-17 04:45:03.767735 | orchestrator | ok: [testbed-manager] 2026-02-17 04:45:03.767746 | orchestrator | 2026-02-17 04:45:03.767757 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-17 04:45:03.767768 | orchestrator | Tuesday 17 February 2026 04:44:44 +0000 (0:00:00.156) 0:00:31.009 ****** 2026-02-17 04:45:03.767779 | orchestrator | ok: [testbed-manager] 2026-02-17 04:45:03.767790 | orchestrator | 2026-02-17 04:45:03.767801 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-17 04:45:03.767812 | orchestrator | Tuesday 17 February 2026 04:45:00 +0000 (0:00:16.638) 0:00:47.648 ****** 2026-02-17 04:45:03.767823 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:45:03.767834 | orchestrator | 2026-02-17 04:45:03.767847 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-17 04:45:03.767859 | orchestrator | Tuesday 17 February 2026 04:45:01 +0000 (0:00:00.173) 0:00:47.821 ****** 2026-02-17 04:45:03.767872 | orchestrator | changed: [testbed-manager] 2026-02-17 04:45:03.767884 | orchestrator | 2026-02-17 04:45:03.767897 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:45:03.767911 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 04:45:03.767949 | orchestrator | 2026-02-17 04:45:03.767962 | orchestrator | 2026-02-17 04:45:03.767975 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:45:03.767988 | orchestrator | Tuesday 17 February 2026 04:45:03 +0000 (0:00:02.328) 0:00:50.150 ****** 2026-02-17 04:45:03.768001 | orchestrator | =============================================================================== 2026-02-17 04:45:03.768013 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 23.85s 2026-02-17 04:45:03.768026 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.64s 2026-02-17 04:45:03.768038 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.35s 2026-02-17 04:45:03.768051 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.33s 2026-02-17 04:45:03.768100 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.18s 2026-02-17 04:45:03.768113 | orchestrator | Include cert_manager role ----------------------------------------------- 0.29s 2026-02-17 04:45:03.768126 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.17s 2026-02-17 04:45:03.768139 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-02-17 04:45:04.150248 | orchestrator | + osism apply magnum 2026-02-17 04:45:06.356892 | orchestrator | 2026-02-17 04:45:06 | INFO  | Task 4755b2ae-0e9a-477f-9f44-1beada3ffd80 (magnum) was prepared for execution. 2026-02-17 04:45:06.357042 | orchestrator | 2026-02-17 04:45:06 | INFO  | It takes a moment until task 4755b2ae-0e9a-477f-9f44-1beada3ffd80 (magnum) has been started and output is visible here. 
2026-02-17 04:45:48.514260 | orchestrator | 2026-02-17 04:45:48.514392 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 04:45:48.514410 | orchestrator | 2026-02-17 04:45:48.514422 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 04:45:48.514434 | orchestrator | Tuesday 17 February 2026 04:45:10 +0000 (0:00:00.238) 0:00:00.238 ****** 2026-02-17 04:45:48.514445 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:45:48.514458 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:45:48.514469 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:45:48.514480 | orchestrator | 2026-02-17 04:45:48.514491 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 04:45:48.514502 | orchestrator | Tuesday 17 February 2026 04:45:10 +0000 (0:00:00.297) 0:00:00.535 ****** 2026-02-17 04:45:48.514514 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-17 04:45:48.514525 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-17 04:45:48.514536 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-17 04:45:48.514547 | orchestrator | 2026-02-17 04:45:48.514558 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-17 04:45:48.514570 | orchestrator | 2026-02-17 04:45:48.514581 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-17 04:45:48.514592 | orchestrator | Tuesday 17 February 2026 04:45:11 +0000 (0:00:00.406) 0:00:00.942 ****** 2026-02-17 04:45:48.514603 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 04:45:48.514628 | orchestrator | 2026-02-17 04:45:48.514640 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-17 
04:45:48.514651 | orchestrator | Tuesday 17 February 2026 04:45:11 +0000 (0:00:00.512) 0:00:01.455 ****** 2026-02-17 04:45:48.514663 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-17 04:45:48.514675 | orchestrator | 2026-02-17 04:45:48.514688 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-17 04:45:48.514701 | orchestrator | Tuesday 17 February 2026 04:45:15 +0000 (0:00:03.472) 0:00:04.928 ****** 2026-02-17 04:45:48.514713 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-17 04:45:48.514742 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-17 04:45:48.514775 | orchestrator | 2026-02-17 04:45:48.514788 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-17 04:45:48.514800 | orchestrator | Tuesday 17 February 2026 04:45:21 +0000 (0:00:06.377) 0:00:11.305 ****** 2026-02-17 04:45:48.514814 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-17 04:45:48.514826 | orchestrator | 2026-02-17 04:45:48.514839 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-17 04:45:48.514851 | orchestrator | Tuesday 17 February 2026 04:45:25 +0000 (0:00:03.500) 0:00:14.806 ****** 2026-02-17 04:45:48.514864 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-17 04:45:48.514877 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-17 04:45:48.514889 | orchestrator | 2026-02-17 04:45:48.514902 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-17 04:45:48.514914 | orchestrator | Tuesday 17 February 2026 04:45:28 +0000 (0:00:03.949) 0:00:18.755 ****** 2026-02-17 04:45:48.514926 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-17 04:45:48.514939 | orchestrator |
2026-02-17 04:45:48.514952 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-02-17 04:45:48.514965 | orchestrator | Tuesday 17 February 2026 04:45:32 +0000 (0:00:03.312) 0:00:22.068 ******
2026-02-17 04:45:48.514978 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-02-17 04:45:48.514990 | orchestrator |
2026-02-17 04:45:48.515002 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-02-17 04:45:48.515015 | orchestrator | Tuesday 17 February 2026 04:45:36 +0000 (0:00:03.795) 0:00:25.864 ******
2026-02-17 04:45:48.515027 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:45:48.515039 | orchestrator |
2026-02-17 04:45:48.515049 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-02-17 04:45:48.515079 | orchestrator | Tuesday 17 February 2026 04:45:39 +0000 (0:00:03.332) 0:00:29.197 ******
2026-02-17 04:45:48.515091 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:45:48.515102 | orchestrator |
2026-02-17 04:45:48.515113 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-02-17 04:45:48.515124 | orchestrator | Tuesday 17 February 2026 04:45:43 +0000 (0:00:03.961) 0:00:33.158 ******
2026-02-17 04:45:48.515135 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:45:48.515146 | orchestrator |
2026-02-17 04:45:48.515157 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-02-17 04:45:48.515168 | orchestrator | Tuesday 17 February 2026 04:45:46 +0000 (0:00:03.470) 0:00:36.629 ******
2026-02-17 04:45:48.515202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:48.515219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:48.515244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:48.515257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:48.515270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:48.515289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:56.232256 | orchestrator |
2026-02-17 04:45:56.232358 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-02-17 04:45:56.232373 | orchestrator | Tuesday 17 February 2026 04:45:48 +0000 (0:00:01.652) 0:00:38.281 ******
2026-02-17 04:45:56.232404 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:45:56.232415 | orchestrator |
2026-02-17 04:45:56.232426 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-02-17 04:45:56.232435 | orchestrator | Tuesday 17 February 2026 04:45:48 +0000 (0:00:00.151) 0:00:38.432 ******
2026-02-17 04:45:56.232445 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:45:56.232455 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:45:56.232464 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:45:56.232474 | orchestrator |
2026-02-17 04:45:56.232484 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-02-17 04:45:56.232493 | orchestrator | Tuesday 17 February 2026 04:45:48 +0000 (0:00:00.314) 0:00:38.748 ******
2026-02-17 04:45:56.232503 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-17 04:45:56.232513 | orchestrator |
2026-02-17 04:45:56.232522 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-02-17 04:45:56.232532 | orchestrator | Tuesday 17 February 2026 04:45:49 +0000 (0:00:00.824) 0:00:39.572 ******
2026-02-17 04:45:56.232559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:56.232574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:56.232584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:56.232612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:56.232631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:56.232647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:56.232657 | orchestrator |
2026-02-17 04:45:56.232668 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-02-17 04:45:56.232678 | orchestrator | Tuesday 17 February 2026 04:45:52 +0000 (0:00:02.543) 0:00:42.115 ******
2026-02-17 04:45:56.232688 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:45:56.232698 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:45:56.232708 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:45:56.232718 | orchestrator |
2026-02-17 04:45:56.232728 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-17 04:45:56.232737 | orchestrator | Tuesday 17 February 2026 04:45:52 +0000 (0:00:00.533) 0:00:42.649 ******
2026-02-17 04:45:56.232748 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:45:56.232758 | orchestrator |
2026-02-17 04:45:56.232768 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-02-17 04:45:56.232777 | orchestrator | Tuesday 17 February 2026 04:45:53 +0000 (0:00:00.602) 0:00:43.252 ******
2026-02-17 04:45:56.232788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:56.232813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:57.175545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:57.175657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:57.175679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:57.175696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:57.175739 | orchestrator |
2026-02-17 04:45:57.175753 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-02-17 04:45:57.175763 | orchestrator | Tuesday 17 February 2026 04:45:56 +0000 (0:00:02.758) 0:00:46.010 ******
2026-02-17 04:45:57.175789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:57.175799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:57.175809 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:45:57.175826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:57.175836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:45:57.175845 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:45:57.175854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:45:57.175876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:00.855857 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:46:00.855967 | orchestrator |
2026-02-17 04:46:00.855984 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-02-17 04:46:00.855998 | orchestrator | Tuesday 17 February 2026 04:45:57 +0000 (0:00:00.939) 0:00:46.949 ******
2026-02-17 04:46:00.856028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:00.856044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:00.856057 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:46:00.856219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:00.856255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:00.856267 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:46:00.856300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:00.856320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:00.856332 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:46:00.856343 | orchestrator |
2026-02-17 04:46:00.856355 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-02-17 04:46:00.856367 | orchestrator | Tuesday 17 February 2026 04:45:58 +0000 (0:00:00.971) 0:00:47.921 ******
2026-02-17 04:46:00.856379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:00.856401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:00.856423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:07.031692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:07.031854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:07.031873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:07.031912 | orchestrator |
2026-02-17 04:46:07.031926 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-02-17 04:46:07.031940 | orchestrator | Tuesday 17 February 2026 04:46:00 +0000 (0:00:02.707) 0:00:50.629 ******
2026-02-17 04:46:07.031952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:07.031984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:07.032002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-17 04:46:07.032014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:07.032122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-17 04:46:07.032143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor',
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:46:07.032160 | orchestrator | 2026-02-17 04:46:07.032183 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-17 04:46:07.032201 | orchestrator | Tuesday 17 February 2026 04:46:06 +0000 (0:00:05.490) 0:00:56.120 ****** 2026-02-17 04:46:07.032236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-17 04:46:08.904470 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:46:08.904576 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:46:08.904594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-17 04:46:08.904630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:46:08.904643 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:46:08.904655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-17 04:46:08.904685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 04:46:08.904697 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:46:08.904709 | orchestrator | 2026-02-17 04:46:08.904721 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-17 04:46:08.904733 | orchestrator | Tuesday 17 February 2026 04:46:07 +0000 (0:00:00.692) 0:00:56.813 ****** 2026-02-17 04:46:08.904752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-17 04:46:08.904772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-17 04:46:08.904785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-17 04:46:08.904797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:46:08.904817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-17 04:47:02.317526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-17 04:47:02.317641 | orchestrator | 2026-02-17 04:47:02.317654 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-17 04:47:02.317664 | orchestrator | Tuesday 17 February 2026 04:46:08 +0000 (0:00:01.866) 0:00:58.679 ****** 2026-02-17 04:47:02.317672 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:47:02.317681 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:47:02.317689 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:47:02.317697 | orchestrator | 2026-02-17 04:47:02.317705 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-17 04:47:02.317713 | orchestrator | Tuesday 17 February 2026 04:46:09 +0000 (0:00:00.574) 0:00:59.253 ****** 2026-02-17 04:47:02.317721 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:47:02.317729 | orchestrator | 2026-02-17 04:47:02.317737 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-17 04:47:02.317744 | orchestrator | Tuesday 17 February 2026 04:46:11 +0000 (0:00:02.120) 0:01:01.374 ****** 2026-02-17 04:47:02.317752 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:47:02.317760 | orchestrator | 2026-02-17 04:47:02.317768 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-17 04:47:02.317776 | orchestrator | Tuesday 17 February 2026 04:46:13 +0000 (0:00:02.171) 0:01:03.546 ****** 2026-02-17 04:47:02.317783 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:47:02.317791 | orchestrator | 2026-02-17 04:47:02.317799 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-17 04:47:02.317807 | orchestrator | Tuesday 17 February 2026 04:46:30 +0000 (0:00:16.565) 0:01:20.111 ****** 2026-02-17 04:47:02.317815 | orchestrator | 2026-02-17 04:47:02.317823 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-17 04:47:02.317831 | orchestrator | Tuesday 17 February 2026 04:46:30 +0000 (0:00:00.092) 0:01:20.203 ****** 2026-02-17 04:47:02.317838 | orchestrator | 2026-02-17 04:47:02.317846 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-17 04:47:02.317854 | orchestrator | Tuesday 17 February 2026 04:46:30 +0000 (0:00:00.072) 0:01:20.276 ****** 2026-02-17 04:47:02.317862 | orchestrator | 2026-02-17 04:47:02.317870 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-17 04:47:02.317878 | orchestrator | Tuesday 17 February 2026 04:46:30 +0000 (0:00:00.072) 0:01:20.349 ****** 2026-02-17 04:47:02.317886 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:47:02.317894 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:47:02.317902 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:47:02.317909 | orchestrator | 2026-02-17 04:47:02.317917 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-17 04:47:02.317925 | orchestrator | Tuesday 17 February 2026 04:46:50 +0000 (0:00:19.838) 0:01:40.187 ****** 2026-02-17 04:47:02.317933 | orchestrator | changed: [testbed-node-0] 2026-02-17 04:47:02.317941 | orchestrator | changed: [testbed-node-2] 2026-02-17 04:47:02.317949 | orchestrator | changed: [testbed-node-1] 2026-02-17 04:47:02.317956 | orchestrator | 2026-02-17 04:47:02.317964 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:47:02.317973 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 04:47:02.317983 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-17 04:47:02.317997 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-17 04:47:02.318005 | orchestrator | 2026-02-17 04:47:02.318013 | orchestrator | 2026-02-17 04:47:02.318059 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:47:02.318090 | orchestrator | Tuesday 17 February 2026 04:47:01 +0000 (0:00:11.384) 0:01:51.571 ****** 2026-02-17 04:47:02.318100 | orchestrator | =============================================================================== 2026-02-17 04:47:02.318110 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.84s 2026-02-17 04:47:02.318120 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.57s 2026-02-17 04:47:02.318130 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.38s 2026-02-17 04:47:02.318139 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.38s 2026-02-17 04:47:02.318148 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.49s 2026-02-17 04:47:02.318157 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.96s 2026-02-17 04:47:02.318167 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.95s 2026-02-17 04:47:02.318191 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.80s 2026-02-17 04:47:02.318200 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.50s 2026-02-17 04:47:02.318210 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.47s 2026-02-17 04:47:02.318224 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.47s 2026-02-17 04:47:02.318234 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.33s 2026-02-17 04:47:02.318242 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.31s 2026-02-17 04:47:02.318251 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.76s 2026-02-17 04:47:02.318260 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.71s 2026-02-17 04:47:02.318269 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.54s 2026-02-17 04:47:02.318278 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.17s 2026-02-17 04:47:02.318287 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.12s 2026-02-17 04:47:02.318296 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.87s 2026-02-17 04:47:02.318305 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.65s 2026-02-17 04:47:02.891503 | orchestrator | ok: Runtime: 1:42:57.088098 2026-02-17 04:47:03.133704 | 2026-02-17 04:47:03.133846 | TASK [Deploy in a nutshell] 2026-02-17 04:47:03.668874 | orchestrator | skipping: Conditional result was False 2026-02-17 04:47:03.690297 | 2026-02-17 04:47:03.690451 | TASK [Bootstrap services] 2026-02-17 04:47:04.368559 | orchestrator | 2026-02-17 04:47:04.368797 | orchestrator | # BOOTSTRAP 2026-02-17 04:47:04.368822 | orchestrator | 2026-02-17 04:47:04.368837 | orchestrator | + set -e 2026-02-17 04:47:04.368851 | orchestrator | + echo 2026-02-17 04:47:04.368865 | orchestrator | + echo '# BOOTSTRAP' 2026-02-17 04:47:04.368883 | orchestrator | + echo 2026-02-17 04:47:04.368942 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-17 04:47:04.378734 | orchestrator | + set -e 2026-02-17 04:47:04.378791 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-17 04:47:06.564188 | orchestrator | 2026-02-17 04:47:06 | INFO  | It takes a 
moment until task fbedb4d5-dcd3-4ed0-8559-def247d37781 (flavor-manager) has been started and output is visible here. 2026-02-17 04:47:15.117922 | orchestrator | 2026-02-17 04:47:10 | INFO  | Flavor SCS-1L-1 created 2026-02-17 04:47:15.118390 | orchestrator | 2026-02-17 04:47:10 | INFO  | Flavor SCS-1L-1-5 created 2026-02-17 04:47:15.118430 | orchestrator | 2026-02-17 04:47:10 | INFO  | Flavor SCS-1V-2 created 2026-02-17 04:47:15.118445 | orchestrator | 2026-02-17 04:47:11 | INFO  | Flavor SCS-1V-2-5 created 2026-02-17 04:47:15.118456 | orchestrator | 2026-02-17 04:47:11 | INFO  | Flavor SCS-1V-4 created 2026-02-17 04:47:15.118468 | orchestrator | 2026-02-17 04:47:11 | INFO  | Flavor SCS-1V-4-10 created 2026-02-17 04:47:15.118480 | orchestrator | 2026-02-17 04:47:11 | INFO  | Flavor SCS-1V-8 created 2026-02-17 04:47:15.118493 | orchestrator | 2026-02-17 04:47:11 | INFO  | Flavor SCS-1V-8-20 created 2026-02-17 04:47:15.118515 | orchestrator | 2026-02-17 04:47:11 | INFO  | Flavor SCS-2V-4 created 2026-02-17 04:47:15.118526 | orchestrator | 2026-02-17 04:47:11 | INFO  | Flavor SCS-2V-4-10 created 2026-02-17 04:47:15.118538 | orchestrator | 2026-02-17 04:47:12 | INFO  | Flavor SCS-2V-8 created 2026-02-17 04:47:15.118549 | orchestrator | 2026-02-17 04:47:12 | INFO  | Flavor SCS-2V-8-20 created 2026-02-17 04:47:15.118560 | orchestrator | 2026-02-17 04:47:12 | INFO  | Flavor SCS-2V-16 created 2026-02-17 04:47:15.118571 | orchestrator | 2026-02-17 04:47:12 | INFO  | Flavor SCS-2V-16-50 created 2026-02-17 04:47:15.118582 | orchestrator | 2026-02-17 04:47:12 | INFO  | Flavor SCS-4V-8 created 2026-02-17 04:47:15.118593 | orchestrator | 2026-02-17 04:47:12 | INFO  | Flavor SCS-4V-8-20 created 2026-02-17 04:47:15.118604 | orchestrator | 2026-02-17 04:47:13 | INFO  | Flavor SCS-4V-16 created 2026-02-17 04:47:15.118614 | orchestrator | 2026-02-17 04:47:13 | INFO  | Flavor SCS-4V-16-50 created 2026-02-17 04:47:15.118625 | orchestrator | 2026-02-17 04:47:13 | INFO  | Flavor 
SCS-4V-32 created 2026-02-17 04:47:15.118636 | orchestrator | 2026-02-17 04:47:13 | INFO  | Flavor SCS-4V-32-100 created 2026-02-17 04:47:15.118647 | orchestrator | 2026-02-17 04:47:13 | INFO  | Flavor SCS-8V-16 created 2026-02-17 04:47:15.118658 | orchestrator | 2026-02-17 04:47:13 | INFO  | Flavor SCS-8V-16-50 created 2026-02-17 04:47:15.118670 | orchestrator | 2026-02-17 04:47:13 | INFO  | Flavor SCS-8V-32 created 2026-02-17 04:47:15.118681 | orchestrator | 2026-02-17 04:47:14 | INFO  | Flavor SCS-8V-32-100 created 2026-02-17 04:47:15.118692 | orchestrator | 2026-02-17 04:47:14 | INFO  | Flavor SCS-16V-32 created 2026-02-17 04:47:15.118703 | orchestrator | 2026-02-17 04:47:14 | INFO  | Flavor SCS-16V-32-100 created 2026-02-17 04:47:15.118714 | orchestrator | 2026-02-17 04:47:14 | INFO  | Flavor SCS-2V-4-20s created 2026-02-17 04:47:15.118725 | orchestrator | 2026-02-17 04:47:14 | INFO  | Flavor SCS-4V-8-50s created 2026-02-17 04:47:15.118736 | orchestrator | 2026-02-17 04:47:14 | INFO  | Flavor SCS-8V-32-100s created 2026-02-17 04:47:17.600739 | orchestrator | 2026-02-17 04:47:17 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-17 04:47:27.763758 | orchestrator | 2026-02-17 04:47:27 | INFO  | Task 4e1e3a3b-7d6e-4cc6-9204-067d994d9d34 (bootstrap-basic) was prepared for execution. 2026-02-17 04:47:27.763882 | orchestrator | 2026-02-17 04:47:27 | INFO  | It takes a moment until task 4e1e3a3b-7d6e-4cc6-9204-067d994d9d34 (bootstrap-basic) has been started and output is visible here. 
2026-02-17 04:48:10.000940 | orchestrator | 2026-02-17 04:48:10.001094 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-17 04:48:10.001189 | orchestrator | 2026-02-17 04:48:10.001205 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 04:48:10.001218 | orchestrator | Tuesday 17 February 2026 04:47:32 +0000 (0:00:00.068) 0:00:00.068 ****** 2026-02-17 04:48:10.001229 | orchestrator | ok: [localhost] 2026-02-17 04:48:10.001242 | orchestrator | 2026-02-17 04:48:10.001253 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-17 04:48:10.001265 | orchestrator | Tuesday 17 February 2026 04:47:33 +0000 (0:00:01.856) 0:00:01.924 ****** 2026-02-17 04:48:10.001285 | orchestrator | ok: [localhost] 2026-02-17 04:48:10.001303 | orchestrator | 2026-02-17 04:48:10.001322 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-17 04:48:10.001340 | orchestrator | Tuesday 17 February 2026 04:47:40 +0000 (0:00:06.581) 0:00:08.505 ****** 2026-02-17 04:48:10.001358 | orchestrator | changed: [localhost] 2026-02-17 04:48:10.001377 | orchestrator | 2026-02-17 04:48:10.001396 | orchestrator | TASK [Create public network] *************************************************** 2026-02-17 04:48:10.001418 | orchestrator | Tuesday 17 February 2026 04:47:46 +0000 (0:00:06.271) 0:00:14.777 ****** 2026-02-17 04:48:10.001438 | orchestrator | changed: [localhost] 2026-02-17 04:48:10.001457 | orchestrator | 2026-02-17 04:48:10.001471 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-17 04:48:10.001484 | orchestrator | Tuesday 17 February 2026 04:47:51 +0000 (0:00:05.119) 0:00:19.897 ****** 2026-02-17 04:48:10.001502 | orchestrator | changed: [localhost] 2026-02-17 04:48:10.001512 | orchestrator | 2026-02-17 04:48:10.001523 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-17 04:48:10.001534 | orchestrator | Tuesday 17 February 2026 04:47:57 +0000 (0:00:06.066) 0:00:25.963 ****** 2026-02-17 04:48:10.001544 | orchestrator | changed: [localhost] 2026-02-17 04:48:10.001555 | orchestrator | 2026-02-17 04:48:10.001566 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-17 04:48:10.001576 | orchestrator | Tuesday 17 February 2026 04:48:02 +0000 (0:00:04.456) 0:00:30.419 ****** 2026-02-17 04:48:10.001587 | orchestrator | changed: [localhost] 2026-02-17 04:48:10.001598 | orchestrator | 2026-02-17 04:48:10.001609 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-17 04:48:10.001631 | orchestrator | Tuesday 17 February 2026 04:48:06 +0000 (0:00:03.845) 0:00:34.265 ****** 2026-02-17 04:48:10.001643 | orchestrator | ok: [localhost] 2026-02-17 04:48:10.001653 | orchestrator | 2026-02-17 04:48:10.001664 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:48:10.001675 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 04:48:10.001687 | orchestrator | 2026-02-17 04:48:10.001698 | orchestrator | 2026-02-17 04:48:10.001709 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:48:10.001720 | orchestrator | Tuesday 17 February 2026 04:48:09 +0000 (0:00:03.522) 0:00:37.787 ****** 2026-02-17 04:48:10.001730 | orchestrator | =============================================================================== 2026-02-17 04:48:10.001741 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.58s 2026-02-17 04:48:10.001752 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.27s 2026-02-17 04:48:10.001763 | 
orchestrator | Set public network to default ------------------------------------------- 6.07s 2026-02-17 04:48:10.001773 | orchestrator | Create public network --------------------------------------------------- 5.12s 2026-02-17 04:48:10.001809 | orchestrator | Create public subnet ---------------------------------------------------- 4.46s 2026-02-17 04:48:10.001821 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.85s 2026-02-17 04:48:10.001832 | orchestrator | Create manager role ----------------------------------------------------- 3.52s 2026-02-17 04:48:10.001843 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s 2026-02-17 04:48:12.533917 | orchestrator | 2026-02-17 04:48:12 | INFO  | It takes a moment until task 6021fa30-eb04-4b8f-9774-4f19d56fa7fb (image-manager) has been started and output is visible here. 2026-02-17 04:48:54.688807 | orchestrator | 2026-02-17 04:48:15 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-17 04:48:54.688938 | orchestrator | 2026-02-17 04:48:15 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-17 04:48:54.688956 | orchestrator | 2026-02-17 04:48:15 | INFO  | Importing image Cirros 0.6.2 2026-02-17 04:48:54.688967 | orchestrator | 2026-02-17 04:48:15 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-17 04:48:54.688981 | orchestrator | 2026-02-17 04:48:17 | INFO  | Waiting for image to leave queued state... 2026-02-17 04:48:54.688999 | orchestrator | 2026-02-17 04:48:19 | INFO  | Waiting for import to complete... 
2026-02-17 04:48:54.689015 | orchestrator | 2026-02-17 04:48:29 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-17 04:48:54.689032 | orchestrator | 2026-02-17 04:48:30 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-17 04:48:54.689048 | orchestrator | 2026-02-17 04:48:30 | INFO  | Setting internal_version = 0.6.2 2026-02-17 04:48:54.689064 | orchestrator | 2026-02-17 04:48:30 | INFO  | Setting image_original_user = cirros 2026-02-17 04:48:54.689080 | orchestrator | 2026-02-17 04:48:30 | INFO  | Adding tag os:cirros 2026-02-17 04:48:54.689097 | orchestrator | 2026-02-17 04:48:30 | INFO  | Setting property architecture: x86_64 2026-02-17 04:48:54.689113 | orchestrator | 2026-02-17 04:48:30 | INFO  | Setting property hw_disk_bus: scsi 2026-02-17 04:48:54.689129 | orchestrator | 2026-02-17 04:48:30 | INFO  | Setting property hw_rng_model: virtio 2026-02-17 04:48:54.689147 | orchestrator | 2026-02-17 04:48:31 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-17 04:48:54.689162 | orchestrator | 2026-02-17 04:48:31 | INFO  | Setting property hw_watchdog_action: reset 2026-02-17 04:48:54.689212 | orchestrator | 2026-02-17 04:48:31 | INFO  | Setting property hypervisor_type: qemu 2026-02-17 04:48:54.689233 | orchestrator | 2026-02-17 04:48:31 | INFO  | Setting property os_distro: cirros 2026-02-17 04:48:54.689246 | orchestrator | 2026-02-17 04:48:32 | INFO  | Setting property os_purpose: minimal 2026-02-17 04:48:54.689257 | orchestrator | 2026-02-17 04:48:32 | INFO  | Setting property replace_frequency: never 2026-02-17 04:48:54.689268 | orchestrator | 2026-02-17 04:48:32 | INFO  | Setting property uuid_validity: none 2026-02-17 04:48:54.689280 | orchestrator | 2026-02-17 04:48:32 | INFO  | Setting property provided_until: none 2026-02-17 04:48:54.689291 | orchestrator | 2026-02-17 04:48:33 | INFO  | Setting property image_description: Cirros 2026-02-17 04:48:54.689303 | orchestrator | 2026-02-17 04:48:33 | INFO  | 
Setting property image_name: Cirros 2026-02-17 04:48:54.689314 | orchestrator | 2026-02-17 04:48:33 | INFO  | Setting property internal_version: 0.6.2 2026-02-17 04:48:54.689325 | orchestrator | 2026-02-17 04:48:33 | INFO  | Setting property image_original_user: cirros 2026-02-17 04:48:54.689387 | orchestrator | 2026-02-17 04:48:34 | INFO  | Setting property os_version: 0.6.2 2026-02-17 04:48:54.689419 | orchestrator | 2026-02-17 04:48:34 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-17 04:48:54.689432 | orchestrator | 2026-02-17 04:48:34 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-17 04:48:54.689444 | orchestrator | 2026-02-17 04:48:34 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-17 04:48:54.689454 | orchestrator | 2026-02-17 04:48:34 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-17 04:48:54.689465 | orchestrator | 2026-02-17 04:48:34 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-17 04:48:54.689476 | orchestrator | 2026-02-17 04:48:35 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-17 04:48:54.689493 | orchestrator | 2026-02-17 04:48:35 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-17 04:48:54.689504 | orchestrator | 2026-02-17 04:48:35 | INFO  | Importing image Cirros 0.6.3 2026-02-17 04:48:54.689516 | orchestrator | 2026-02-17 04:48:35 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-17 04:48:54.689530 | orchestrator | 2026-02-17 04:48:36 | INFO  | Waiting for image to leave queued state... 2026-02-17 04:48:54.689546 | orchestrator | 2026-02-17 04:48:38 | INFO  | Waiting for import to complete... 
2026-02-17 04:48:54.689585 | orchestrator | 2026-02-17 04:48:48 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-17 04:48:54.689604 | orchestrator | 2026-02-17 04:48:48 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-17 04:48:54.689616 | orchestrator | 2026-02-17 04:48:48 | INFO  | Setting internal_version = 0.6.3 2026-02-17 04:48:54.689632 | orchestrator | 2026-02-17 04:48:48 | INFO  | Setting image_original_user = cirros 2026-02-17 04:48:54.689648 | orchestrator | 2026-02-17 04:48:48 | INFO  | Adding tag os:cirros 2026-02-17 04:48:54.689663 | orchestrator | 2026-02-17 04:48:48 | INFO  | Setting property architecture: x86_64 2026-02-17 04:48:54.689677 | orchestrator | 2026-02-17 04:48:49 | INFO  | Setting property hw_disk_bus: scsi 2026-02-17 04:48:54.689693 | orchestrator | 2026-02-17 04:48:49 | INFO  | Setting property hw_rng_model: virtio 2026-02-17 04:48:54.689708 | orchestrator | 2026-02-17 04:48:49 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-17 04:48:54.689724 | orchestrator | 2026-02-17 04:48:50 | INFO  | Setting property hw_watchdog_action: reset 2026-02-17 04:48:54.689740 | orchestrator | 2026-02-17 04:48:50 | INFO  | Setting property hypervisor_type: qemu 2026-02-17 04:48:54.689756 | orchestrator | 2026-02-17 04:48:50 | INFO  | Setting property os_distro: cirros 2026-02-17 04:48:54.689766 | orchestrator | 2026-02-17 04:48:50 | INFO  | Setting property os_purpose: minimal 2026-02-17 04:48:54.689776 | orchestrator | 2026-02-17 04:48:51 | INFO  | Setting property replace_frequency: never 2026-02-17 04:48:54.689786 | orchestrator | 2026-02-17 04:48:51 | INFO  | Setting property uuid_validity: none 2026-02-17 04:48:54.689801 | orchestrator | 2026-02-17 04:48:51 | INFO  | Setting property provided_until: none 2026-02-17 04:48:54.689817 | orchestrator | 2026-02-17 04:48:51 | INFO  | Setting property image_description: Cirros 2026-02-17 04:48:54.689834 | orchestrator | 2026-02-17 04:48:52 | INFO  | 
Setting property image_name: Cirros 2026-02-17 04:48:54.689850 | orchestrator | 2026-02-17 04:48:52 | INFO  | Setting property internal_version: 0.6.3 2026-02-17 04:48:54.689881 | orchestrator | 2026-02-17 04:48:52 | INFO  | Setting property image_original_user: cirros 2026-02-17 04:48:54.689897 | orchestrator | 2026-02-17 04:48:52 | INFO  | Setting property os_version: 0.6.3 2026-02-17 04:48:54.689907 | orchestrator | 2026-02-17 04:48:53 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-17 04:48:54.689917 | orchestrator | 2026-02-17 04:48:53 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-17 04:48:54.689927 | orchestrator | 2026-02-17 04:48:53 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-17 04:48:54.689941 | orchestrator | 2026-02-17 04:48:53 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-17 04:48:54.689957 | orchestrator | 2026-02-17 04:48:53 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-17 04:48:55.006449 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh 2026-02-17 04:48:57.410669 | orchestrator | 2026-02-17 04:48:57 | INFO  | date: 2026-02-17 2026-02-17 04:48:57.410774 | orchestrator | 2026-02-17 04:48:57 | INFO  | image: octavia-amphora-haproxy-2024.2.20260217.qcow2 2026-02-17 04:48:57.410814 | orchestrator | 2026-02-17 04:48:57 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260217.qcow2 2026-02-17 04:48:57.410829 | orchestrator | 2026-02-17 04:48:57 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260217.qcow2.CHECKSUM 2026-02-17 04:48:58.965405 | orchestrator | 2026-02-17 04:48:58 | INFO  | checksum: 6040bc1e685fb2dac7da1f9d913ae96b24fbe08f0d53fcbf4529d64b85510887 2026-02-17 04:48:59.040312 | orchestrator | 
2026-02-17 04:48:59 | INFO  | It takes a moment until task 4a8bcefd-359d-4a11-8bd1-7b4d36e09efe (image-manager) has been started and output is visible here. 2026-02-17 04:50:22.421831 | orchestrator | 2026-02-17 04:49:01 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-17' 2026-02-17 04:50:22.421948 | orchestrator | 2026-02-17 04:49:01 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260217.qcow2: 200 2026-02-17 04:50:22.421966 | orchestrator | 2026-02-17 04:49:01 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-17 2026-02-17 04:50:22.421978 | orchestrator | 2026-02-17 04:49:01 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260217.qcow2 2026-02-17 04:50:22.421991 | orchestrator | 2026-02-17 04:49:03 | INFO  | Waiting for image to leave queued state... 2026-02-17 04:50:22.422002 | orchestrator | 2026-02-17 04:49:05 | INFO  | Waiting for import to complete... 2026-02-17 04:50:22.422081 | orchestrator | 2026-02-17 04:49:15 | INFO  | Waiting for import to complete... 2026-02-17 04:50:22.422097 | orchestrator | 2026-02-17 04:49:25 | INFO  | Waiting for import to complete... 2026-02-17 04:50:22.422108 | orchestrator | 2026-02-17 04:49:35 | INFO  | Waiting for import to complete... 2026-02-17 04:50:22.422122 | orchestrator | 2026-02-17 04:49:45 | INFO  | Waiting for import to complete... 2026-02-17 04:50:22.422134 | orchestrator | 2026-02-17 04:49:55 | INFO  | Waiting for import to complete... 2026-02-17 04:50:22.422146 | orchestrator | 2026-02-17 04:50:05 | INFO  | Waiting for import to complete... 
2026-02-17 04:50:22.422157 | orchestrator | 2026-02-17 04:50:15 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-17' successfully completed, reloading images 2026-02-17 04:50:22.422169 | orchestrator | 2026-02-17 04:50:16 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-17' 2026-02-17 04:50:22.422207 | orchestrator | 2026-02-17 04:50:16 | INFO  | Setting internal_version = 2026-02-17 2026-02-17 04:50:22.422219 | orchestrator | 2026-02-17 04:50:16 | INFO  | Setting image_original_user = ubuntu 2026-02-17 04:50:22.422231 | orchestrator | 2026-02-17 04:50:16 | INFO  | Adding tag amphora 2026-02-17 04:50:22.422243 | orchestrator | 2026-02-17 04:50:16 | INFO  | Adding tag os:ubuntu 2026-02-17 04:50:22.422254 | orchestrator | 2026-02-17 04:50:17 | INFO  | Setting property architecture: x86_64 2026-02-17 04:50:22.422264 | orchestrator | 2026-02-17 04:50:17 | INFO  | Setting property hw_disk_bus: scsi 2026-02-17 04:50:22.422275 | orchestrator | 2026-02-17 04:50:17 | INFO  | Setting property hw_rng_model: virtio 2026-02-17 04:50:22.422286 | orchestrator | 2026-02-17 04:50:17 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-17 04:50:22.422326 | orchestrator | 2026-02-17 04:50:18 | INFO  | Setting property hw_watchdog_action: reset 2026-02-17 04:50:22.422338 | orchestrator | 2026-02-17 04:50:18 | INFO  | Setting property hypervisor_type: qemu 2026-02-17 04:50:22.422349 | orchestrator | 2026-02-17 04:50:18 | INFO  | Setting property os_distro: ubuntu 2026-02-17 04:50:22.422361 | orchestrator | 2026-02-17 04:50:18 | INFO  | Setting property replace_frequency: quarterly 2026-02-17 04:50:22.422373 | orchestrator | 2026-02-17 04:50:19 | INFO  | Setting property uuid_validity: last-1 2026-02-17 04:50:22.422387 | orchestrator | 2026-02-17 04:50:19 | INFO  | Setting property provided_until: none 2026-02-17 04:50:22.422399 | orchestrator | 2026-02-17 04:50:19 | INFO  | Setting property os_purpose: network 2026-02-17 04:50:22.422426 | orchestrator 
| 2026-02-17 04:50:19 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-17 04:50:22.422439 | orchestrator | 2026-02-17 04:50:20 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-17 04:50:22.422451 | orchestrator | 2026-02-17 04:50:20 | INFO  | Setting property internal_version: 2026-02-17 2026-02-17 04:50:22.422464 | orchestrator | 2026-02-17 04:50:20 | INFO  | Setting property image_original_user: ubuntu 2026-02-17 04:50:22.422476 | orchestrator | 2026-02-17 04:50:21 | INFO  | Setting property os_version: 2026-02-17 2026-02-17 04:50:22.422489 | orchestrator | 2026-02-17 04:50:21 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260217.qcow2 2026-02-17 04:50:22.422502 | orchestrator | 2026-02-17 04:50:21 | INFO  | Setting property image_build_date: 2026-02-17 2026-02-17 04:50:22.422514 | orchestrator | 2026-02-17 04:50:21 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-17' 2026-02-17 04:50:22.422544 | orchestrator | 2026-02-17 04:50:21 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-17' 2026-02-17 04:50:22.422555 | orchestrator | 2026-02-17 04:50:22 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-17 04:50:22.422566 | orchestrator | 2026-02-17 04:50:22 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-17 04:50:22.422578 | orchestrator | 2026-02-17 04:50:22 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-17 04:50:22.422589 | orchestrator | 2026-02-17 04:50:22 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-17 04:50:22.880827 | orchestrator | ok: Runtime: 0:03:18.774606 2026-02-17 04:50:22.898355 | 2026-02-17 04:50:22.898491 | TASK [Run checks] 2026-02-17 04:50:23.666824 | orchestrator | + set -e 2026-02-17 04:50:23.666994 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-17 04:50:23.667014 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 04:50:23.667032 | orchestrator | ++ INTERACTIVE=false 2026-02-17 04:50:23.667043 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 04:50:23.667054 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 04:50:23.667065 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-17 04:50:23.668247 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-17 04:50:23.675423 | orchestrator | 2026-02-17 04:50:23.675487 | orchestrator | # CHECK 2026-02-17 04:50:23.675496 | orchestrator | 2026-02-17 04:50:23.675505 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 04:50:23.675517 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 04:50:23.675526 | orchestrator | + echo 2026-02-17 04:50:23.675534 | orchestrator | + echo '# CHECK' 2026-02-17 04:50:23.675542 | orchestrator | + echo 2026-02-17 04:50:23.675555 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-17 04:50:23.676036 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-17 04:50:23.750758 | orchestrator | 2026-02-17 04:50:23.750867 | orchestrator | ## Containers @ testbed-manager 2026-02-17 04:50:23.750884 | orchestrator | 2026-02-17 04:50:23.750898 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-17 04:50:23.750910 | orchestrator | + echo 2026-02-17 04:50:23.750922 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-17 04:50:23.750934 | orchestrator | + echo 2026-02-17 04:50:23.750946 | orchestrator | + osism container testbed-manager ps 2026-02-17 04:50:25.741766 | orchestrator | 2026-02-17 04:50:25 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-17 04:50:26.168383 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-17 04:50:26.168596 | orchestrator | 193223e5ab58 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-02-17 04:50:26.168622 | orchestrator | cdf07d29bda4 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-02-17 04:50:26.168635 | orchestrator | 00273d003d5e registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-17 04:50:26.168664 | orchestrator | 7df0541ffb10 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-17 04:50:26.168677 | orchestrator | 9be0e1c56911 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-02-17 04:50:26.168692 | orchestrator | e21fce06f47c registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 57 minutes ago Up 57 minutes cephclient 2026-02-17 04:50:26.168704 | orchestrator | e64177e29312 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-17 04:50:26.168716 | orchestrator | 848d1047ed52 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-17 04:50:26.168753 | orchestrator | 2fe65c306d12 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-17 04:50:26.168767 | orchestrator | c5594b892dd7 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-17 04:50:26.168778 | orchestrator | b51b8d6cb9cc phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-17 04:50:26.168842 | 
orchestrator | 7412874219df registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-17 04:50:26.168866 | orchestrator | 58db7f1c5864 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-17 04:50:26.168886 | orchestrator | 36e8e6019ee4 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-17 04:50:26.168908 | orchestrator | 3570b0bd6590 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-17 04:50:26.168945 | orchestrator | ff7bd498de14 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-17 04:50:26.168976 | orchestrator | 6c6a21ce7074 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-17 04:50:26.168996 | orchestrator | d4d76fcb2746 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-17 04:50:26.169372 | orchestrator | fc59feca3276 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-17 04:50:26.169401 | orchestrator | f39a40f828a3 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-17 04:50:26.169412 | orchestrator | f563eb56e3bc registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-17 04:50:26.169424 | orchestrator | 8da57b8dea2f registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp 
osism-frontend 2026-02-17 04:50:26.169810 | orchestrator | 995f7b9f5098 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-17 04:50:26.169842 | orchestrator | 71213d6a2347 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-17 04:50:26.169863 | orchestrator | a6372d807b1e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-17 04:50:26.169882 | orchestrator | dfc1087a392f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-17 04:50:26.169902 | orchestrator | 2de9b8b0027d registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-17 04:50:26.169913 | orchestrator | a6e487690f43 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-17 04:50:26.169924 | orchestrator | e02df74864ab registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-17 04:50:26.169945 | orchestrator | 07ea425f875f registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-17 04:50:26.480040 | orchestrator | 2026-02-17 04:50:26.480113 | orchestrator | ## Images @ testbed-manager 2026-02-17 04:50:26.480120 | orchestrator | 2026-02-17 04:50:26.480124 | orchestrator | + echo 2026-02-17 04:50:26.480128 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-17 04:50:26.480133 | orchestrator | + echo 2026-02-17 04:50:26.480140 | orchestrator | + osism container testbed-manager images 2026-02-17 04:50:28.906470 | orchestrator 
| REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-17 04:50:28.906592 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 a3a35e47054f 25 hours ago 239MB 2026-02-17 04:50:28.906609 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 weeks ago 41.4MB 2026-02-17 04:50:28.906622 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-17 04:50:28.906634 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-17 04:50:28.906645 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-17 04:50:28.906657 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-17 04:50:28.906668 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-17 04:50:28.906681 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-17 04:50:28.906692 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-17 04:50:28.906729 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-17 04:50:28.906741 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-17 04:50:28.906752 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-17 04:50:28.906762 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-17 04:50:28.906780 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-17 04:50:28.906799 | orchestrator 
| registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-17 04:50:28.906818 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-17 04:50:28.906836 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-17 04:50:28.906855 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-17 04:50:28.906872 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB 2026-02-17 04:50:28.906889 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB 2026-02-17 04:50:28.906908 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-17 04:50:28.906926 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB 2026-02-17 04:50:28.906946 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB 2026-02-17 04:50:28.906960 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-17 04:50:28.906971 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-17 04:50:29.216801 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-17 04:50:29.217744 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-17 04:50:29.269791 | orchestrator | 2026-02-17 04:50:29.269915 | orchestrator | ## Containers @ testbed-node-0 2026-02-17 04:50:29.269939 | orchestrator | 2026-02-17 04:50:29.269951 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-17 04:50:29.269962 | orchestrator | + echo 2026-02-17 04:50:29.269974 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-17 04:50:29.269986 | orchestrator | + echo 2026-02-17 04:50:29.269997 | orchestrator | + osism container testbed-node-0 ps 
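The shell trace above (from `include.sh` onward) shows the check script extracting `manager_version` with `awk`, comparing it against `5.0.0` with a `semver` helper, and then listing containers per node. The guard can be sketched as follows; this is a minimal reconstruction, not the actual testbed script: `semver_cmp` is a hypothetical pure-bash stand-in for the `semver` command in the trace, and the `osism container <node> ps` call is replaced by a plain `echo` so the sketch stays self-contained.

```shell
#!/usr/bin/env bash
# Sketch of the version-gated per-node check loop seen in the trace.

# Compare two X.Y.Z versions; prints -1, 0, or 1 like the semver helper.
semver_cmp() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
    done
    echo 0
}

# In the job this comes from configuration.yml via awk:
#   awk -F': ' '/^manager_version:/ { print $2 }' .../configuration.yml
MANAGER_VERSION=9.5.0

for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
    # Skip nodes when the manager is older than 5.0.0, matching the
    # `[[ $(semver ...) -eq -1 ]]` guard in the trace.
    if [[ "$(semver_cmp "$MANAGER_VERSION" 5.0.0)" -eq -1 ]]; then
        continue
    fi
    echo "## Containers @ ${node}"   # the real script runs: osism container "$node" ps
done
```

Since `9.5.0` compares greater than `5.0.0`, the guard passes for every node and a container listing is printed for each, which is what the subsequent `## Containers @ ...` sections in the log show.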
2026-02-17 04:50:31.721959 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-17 04:50:31.722200 | orchestrator | 169ecf9aaf64 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-17 04:50:31.722240 | orchestrator | 34abd0e8311f registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-17 04:50:31.722251 | orchestrator | 29db5156e4df registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-02-17 04:50:31.722261 | orchestrator | ee3061d180b9 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-17 04:50:31.722296 | orchestrator | 43e42dc2f2da registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-17 04:50:31.722307 | orchestrator | f1995df839ad registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-02-17 04:50:31.722441 | orchestrator | 8d513d130a31 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-17 04:50:31.722454 | orchestrator | 421eacc69f6b registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-17 04:50:31.722464 | orchestrator | 70d8de6d7047 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-17 04:50:31.722481 | orchestrator | 8e87e202bf58 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-17 04:50:31.722497 | orchestrator | 9156f8458655 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-17 04:50:31.722515 | orchestrator | 2db52cb354f7 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-17 04:50:31.722532 | orchestrator | cdfba35a4af9 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-17 04:50:31.722547 | orchestrator | ed8a8d4ccfe5 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-17 04:50:31.722557 | orchestrator | 4bbc31abdb73 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-17 04:50:31.722567 | orchestrator | 6331d2400e16 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-17 04:50:31.722577 | orchestrator | dad610f780db registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-17 04:50:31.722586 | orchestrator | 26938107481b registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-17 04:50:31.722595 | orchestrator | 5fdb1ad4c0ff registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-17 04:50:31.722631 | orchestrator | 2c1fa2edf58c 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-02-17 04:50:31.722642 | orchestrator | 1b99924c3f2b registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-02-17 04:50:31.722651 | orchestrator | 157b3388ccfe registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-02-17 04:50:31.722672 | orchestrator | e7da30f4c83f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) octavia_api
2026-02-17 04:50:31.722688 | orchestrator | 821207d209f6 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-17 04:50:31.722705 | orchestrator | f73cb92c0982 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-02-17 04:50:31.722728 | orchestrator | ab7da46f56e5 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-02-17 04:50:31.722746 | orchestrator | eaa83f8a9ecb registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-02-17 04:50:31.722756 | orchestrator | a5dcf000151b registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-02-17 04:50:31.722765 | orchestrator | 212ca66f57d0 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-02-17 04:50:31.722775 | orchestrator | ed95a57e05db registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-02-17 04:50:31.722785 | orchestrator | 60142e1cb362 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-02-17 04:50:31.722803 | orchestrator | 841ea78ed743 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-17 04:50:31.722814 | orchestrator | ce641b599f4f registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-02-17 04:50:31.722823 | orchestrator | df4ffa10f005 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) cinder_volume
2026-02-17 04:50:31.722833 | orchestrator | 11cd2d99a197 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-17 04:50:31.722842 | orchestrator | b75a39b915a3 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-02-17 04:50:31.722852 | orchestrator | e83f411add43 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-02-17 04:50:31.722862 | orchestrator | ab0d142921c4 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-17 04:50:31.722871 | orchestrator | 7dd126af06a3 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-02-17 04:50:31.722881 | orchestrator | 9aa75c979e24 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-02-17 04:50:31.722898 | orchestrator | 8e86f15bdbad registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-02-17 04:50:31.722908 | orchestrator | 4a7278d2e292 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-02-17 04:50:31.722924 | orchestrator | 05ddb96014ce registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api
2026-02-17 04:50:31.722940 | orchestrator | cb5e7fd4e8c8 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-02-17 04:50:31.722955 | orchestrator | 8b3c99512070 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-02-17 04:50:31.722972 | orchestrator | fc2bf0b1bce5 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-02-17 04:50:31.722987 | orchestrator | e60173c192cd registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone
2026-02-17 04:50:31.723003 | orchestrator | 7a5dea40e5ad registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet
2026-02-17 04:50:31.723019 | orchestrator | 3f5fe0599664 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh
2026-02-17 04:50:31.723037 | orchestrator | d7d33a721536 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-0
2026-02-17 04:50:31.723053 | orchestrator | 7aa125f121d3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-02-17 04:50:31.723080 | orchestrator | 6b2dae68d29f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-02-17 04:50:31.723097 | orchestrator | 516b91cb6e5f registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-17 04:50:31.723108 | orchestrator | 0e7109880462 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-17 04:50:31.723117 | orchestrator | 302e8e0f620b registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-17 04:50:31.723127 | orchestrator | 95eb4b24aea9 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-17 04:50:31.723142 | orchestrator | 9af7d9409153 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-17 04:50:31.723152 | orchestrator | 2a250655c5a4 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-17 04:50:31.723169 | orchestrator | 6dd9116d76b4 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-17 04:50:31.723179 | orchestrator | 713f94c5b3c1 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-17 04:50:31.723188 | orchestrator | dee8ed0ea395 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-17 04:50:31.723198 | orchestrator | bdb0b280af80 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-17 04:50:31.723208 | orchestrator | cfeb760a402f registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-17 04:50:31.723217 | orchestrator | adae34d945b9 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-17 04:50:31.723227 | orchestrator | 6018f65701cf registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-17 04:50:31.723236 | orchestrator | ae15a06ddee1 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-17 04:50:31.723245 | orchestrator | 342facfd0105 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-17 04:50:31.723255 | orchestrator | 87cb046a2836 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-17 04:50:31.723265 | orchestrator | fc556ef261fa registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-17 04:50:31.723274 | orchestrator | 8eb19cfb43a3 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-17 04:50:31.723284 | orchestrator | ec56d01e23ba registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-17 04:50:32.030768 | orchestrator |
2026-02-17 04:50:32.030899 | orchestrator | ## Images @ testbed-node-0
2026-02-17 04:50:32.030929 | orchestrator |
2026-02-17 04:50:32.030950 | orchestrator | + echo
2026-02-17 04:50:32.030971 | orchestrator | + echo '## Images @ testbed-node-0'
2026-02-17 04:50:32.030991 | orchestrator | + echo
2026-02-17 04:50:32.031010 | orchestrator | + osism container testbed-node-0 images
2026-02-17 04:50:34.439059 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-17 04:50:34.439226 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-17 04:50:34.439260 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-17 04:50:34.439281 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-17 04:50:34.439300 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-17 04:50:34.439409 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-17 04:50:34.439433 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-17 04:50:34.439445 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-17 04:50:34.439456 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-17 04:50:34.439467 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-17 04:50:34.439478 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-17 04:50:34.439489 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-17 04:50:34.439499 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-17 04:50:34.439510 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-17 04:50:34.439521 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-17 04:50:34.439532 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-17 04:50:34.439544 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-17 04:50:34.439557 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-17 04:50:34.439569 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-17 04:50:34.439582 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-17 04:50:34.439593 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-17 04:50:34.439606 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-17 04:50:34.439618 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-17 04:50:34.439630 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-17 04:50:34.439642 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-17 04:50:34.439654 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-17 04:50:34.439666 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-17 04:50:34.439698 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-17 04:50:34.439719 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-17 04:50:34.439733 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-17 04:50:34.439746 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-17 04:50:34.439767 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-17 04:50:34.439780 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-17 04:50:34.439791 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-17 04:50:34.439802 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-17 04:50:34.439813 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-17 04:50:34.439824 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-17 04:50:34.439834 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-17 04:50:34.439845 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-17 04:50:34.439856 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-17 04:50:34.439866 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-17 04:50:34.439877 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-17 04:50:34.439888 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-17 04:50:34.439899 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-17 04:50:34.439910 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-17 04:50:34.439920 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-17 04:50:34.439931 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-17 04:50:34.439943 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-17 04:50:34.439953 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-17 04:50:34.439964 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-17 04:50:34.439975 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-17 04:50:34.439986 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-17 04:50:34.439997 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-17 04:50:34.440007 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-17 04:50:34.440018 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-17 04:50:34.440029 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-17 04:50:34.440039 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-17 04:50:34.440056 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-17 04:50:34.440067 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-17 04:50:34.440083 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-17 04:50:34.440105 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-17 04:50:34.440116 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-17 04:50:34.440127 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-17 04:50:34.440138 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-17 04:50:34.440149 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-17 04:50:34.440160 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-17 04:50:34.440170 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-17 04:50:34.440181 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-17 04:50:34.440192 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-17 04:50:34.440203 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-17 04:50:34.765542 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-17 04:50:34.766005 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-17 04:50:34.833857 | orchestrator |
2026-02-17 04:50:34.833942 | orchestrator | ## Containers @ testbed-node-1
2026-02-17 04:50:34.833960 | orchestrator |
2026-02-17 04:50:34.833971 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-17 04:50:34.833981 | orchestrator | + echo
2026-02-17 04:50:34.833991 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-02-17 04:50:34.834002 | orchestrator | + echo
2026-02-17 04:50:34.834012 | orchestrator | + osism container testbed-node-1 ps
2026-02-17 04:50:37.265542 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-17 04:50:37.265643 | orchestrator | f98dc1c321b2 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-17 04:50:37.265659 | orchestrator | 1345c81fff93 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-02-17 04:50:37.265672 | orchestrator | bf96bb4bfae5 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-02-17 04:50:37.265684 | orchestrator | 9e4fc70199be registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-02-17 04:50:37.265698 | orchestrator | 6202915968c4 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-02-17 04:50:37.265709 | orchestrator | aeca17970a61 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-02-17 04:50:37.265741 | orchestrator | 6570a4fc0a08 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-17 04:50:37.265753 | orchestrator | 9fb03747a098 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-17 04:50:37.265765 | orchestrator | bbab0cabcbe6 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-17 04:50:37.265777 | orchestrator | 0cb32abef6f7 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-02-17 04:50:37.265788 | orchestrator | a3cd09f11d4e registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-02-17 04:50:37.265800 | orchestrator | b4829a27cb27 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) manila_api
2026-02-17 04:50:37.265820 | orchestrator | 8a40f188c06f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-02-17 04:50:37.265832 | orchestrator | 6277d26e8402 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) aodh_listener
2026-02-17 04:50:37.265843 | orchestrator | a21bb67c4bec registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-17 04:50:37.265855 | orchestrator | 4fbebb6f4b4d registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-17 04:50:37.265866 | orchestrator | 1dc6344a8a3f registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-02-17 04:50:37.265877 | orchestrator | a9000b2e3ba0 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-02-17 04:50:37.265889 | orchestrator | de3fabc5fc11 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-17 04:50:37.265920 | orchestrator | d569c5ca4cc6 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-02-17 04:50:37.265932 | orchestrator | 3200223e390f registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-02-17 04:50:37.265943 | orchestrator | f46ca76f6767 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-02-17 04:50:37.265954 | orchestrator | 27f7b8628ad8 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-02-17 04:50:37.265965 | orchestrator | e089c79ef7fa registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-17 04:50:37.265983 | orchestrator | f19714e85b65 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-02-17 04:50:37.265994 | orchestrator | c0c5f2c4101a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-02-17 04:50:37.266005 | orchestrator | c32cd9e4178e registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-02-17 04:50:37.266078 | orchestrator | 8bd4514fa8a4 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-02-17 04:50:37.266093 | orchestrator | 56eb43e3fb7b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-02-17 04:50:37.266106 | orchestrator | 386262931e2a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-02-17 04:50:37.266118 | orchestrator | 935188c39874 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-02-17 04:50:37.266132 | orchestrator | 5ecc426c7aad registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-17 04:50:37.266144 | orchestrator | 03b3a602ad98 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-02-17 04:50:37.266157 | orchestrator | d1d85beb99af registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-02-17 04:50:37.266170 | orchestrator | f926a3529664 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-17 04:50:37.266183 | orchestrator | 67fab62f3215 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-02-17 04:50:37.266201 | orchestrator | 1d2f077a11d7 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-02-17 04:50:37.266213 | orchestrator | c327424a6b5a registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-17 04:50:37.266226 | orchestrator | 311f85a6be86 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-02-17 04:50:37.266247 | orchestrator | 9d109f40fc4a registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-02-17 04:50:37.266261 | orchestrator | 72776c82d358 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-02-17 04:50:37.266280 | orchestrator | 93549f951497 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-02-17 04:50:37.266293 | orchestrator | b4ade024b934 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api
2026-02-17 04:50:37.266305 | orchestrator | 0805a1684d8d registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-02-17 04:50:37.266357 | orchestrator | cff7396b88fc registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-02-17 04:50:37.266373 | orchestrator | ad37525ed071 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-02-17 04:50:37.266385 | orchestrator | d46b5c91152a registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone
2026-02-17 04:50:37.266398 | orchestrator | 368e6092c8e8 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet
2026-02-17 04:50:37.266411 | orchestrator | 160af448a412 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_ssh
2026-02-17 04:50:37.266422 | orchestrator | 010c06c0657f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-1
2026-02-17 04:50:37.266433 | orchestrator | 88c9595447e1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-02-17 04:50:37.266444 | orchestrator | 5939893342f8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1
2026-02-17 04:50:37.266455 | orchestrator | da28a83e4723 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-17 04:50:37.266466 | orchestrator | f3d6d8fc63cd registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-17 04:50:37.266477 | orchestrator | 46f899c8fe99 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-17 04:50:37.266488 | orchestrator | 9c0d07d45e38 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-17 04:50:37.266499 | orchestrator | 02ec6f0baf7b registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-17 04:50:37.266509 | orchestrator | 81427feeb665 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-17 04:50:37.266520 | orchestrator | 651977ddbeec registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-17 04:50:37.266543 | orchestrator | ffa7f1f45364 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-17 04:50:37.266555 | orchestrator | 06a0ef34aa35 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-17 04:50:37.266566 | orchestrator | fe1c9735ee1d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-17 04:50:37.266577 | orchestrator | 01640b73666e registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-17 04:50:37.266588 | orchestrator | 256851c9508b registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-17 04:50:37.266604 | orchestrator | 34e2a15d07e4 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-17 04:50:37.266616 | orchestrator | b214efca5d0e registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-17 04:50:37.266627 | orchestrator | 2b05f5b75d78 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-17 04:50:37.266638 | orchestrator | 876f41264c87 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-17 04:50:37.266649 | orchestrator | af8d97d51fd2 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-17 04:50:37.266664 | orchestrator | 163d83b88724 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-17 04:50:37.266675 | orchestrator | 6221df27bae4 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-17 04:50:37.593391 | orchestrator |
2026-02-17 04:50:37.593487 | orchestrator | ## Images @ testbed-node-1
2026-02-17 04:50:37.593502 | orchestrator |
2026-02-17 04:50:37.593514 | orchestrator | + echo
2026-02-17 04:50:37.593526 | orchestrator | + echo '## Images @ testbed-node-1'
2026-02-17 04:50:37.593538 | orchestrator | + echo
2026-02-17 04:50:37.593549 | orchestrator | + osism container testbed-node-1 images
2026-02-17 04:50:40.042940 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-17 04:50:40.043047 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-17 04:50:40.043062 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-17 04:50:40.043073 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-17 04:50:40.043085 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-17 04:50:40.043097 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-17 04:50:40.043108 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-17 04:50:40.043143 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-17 04:50:40.043154 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-17 04:50:40.043165 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-17 04:50:40.043177 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-17 04:50:40.043187 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-17 04:50:40.043198 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-17 04:50:40.043209 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-17 04:50:40.043220 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-17 04:50:40.043231 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-17 04:50:40.043242 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-17 04:50:40.043253 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-17 04:50:40.043264 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-17 04:50:40.043274 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-17 04:50:40.043285 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-17 04:50:40.043296 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-17 04:50:40.043307 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-17 04:50:40.043318 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-17 04:50:40.043378 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-17 04:50:40.043389 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-17 04:50:40.043400 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-17 04:50:40.043411 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-17 04:50:40.043422 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-17 04:50:40.043433 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-17 04:50:40.043444 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-17 04:50:40.043455 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-17 04:50:40.043483 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-17 04:50:40.043507 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-17 04:50:40.043520 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-17 04:50:40.043532 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-17 04:50:40.043545 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-17 04:50:40.043557 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-17 04:50:40.043587 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-17 04:50:40.043600 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-17 04:50:40.043613 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-17 04:50:40.043625 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-17 04:50:40.043638 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-17 04:50:40.043650 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-17
04:50:40.043663 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-17 04:50:40.043676 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-17 04:50:40.043689 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-17 04:50:40.043702 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-17 04:50:40.043715 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-17 04:50:40.043728 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-17 04:50:40.043741 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-17 04:50:40.043754 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-17 04:50:40.043766 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-17 04:50:40.043779 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-17 04:50:40.043792 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-17 04:50:40.043804 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-17 04:50:40.043818 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-17 04:50:40.043830 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 
2026-02-17 04:50:40.043841 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-17 04:50:40.043851 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-17 04:50:40.043869 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-17 04:50:40.043880 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-17 04:50:40.043891 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-17 04:50:40.043902 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-17 04:50:40.043920 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-17 04:50:40.043931 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-17 04:50:40.043942 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-17 04:50:40.043953 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-17 04:50:40.043964 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-17 04:50:40.043975 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-17 04:50:40.378460 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-17 04:50:40.378564 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-17 04:50:40.436675 | orchestrator | 2026-02-17 04:50:40.436744 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-17 04:50:40.436750 | 
orchestrator | + echo 2026-02-17 04:50:40.436756 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-17 04:50:40.436987 | orchestrator | ## Containers @ testbed-node-2 2026-02-17 04:50:40.437000 | orchestrator | 2026-02-17 04:50:40.437005 | orchestrator | + echo 2026-02-17 04:50:40.437010 | orchestrator | + osism container testbed-node-2 ps 2026-02-17 04:50:42.876079 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-17 04:50:42.876182 | orchestrator | 6df4f9183037 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-17 04:50:42.876198 | orchestrator | b5deb4a2e953 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-17 04:50:42.876210 | orchestrator | 705cb5a89331 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-17 04:50:42.876221 | orchestrator | 99e5e57bc222 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-17 04:50:42.876235 | orchestrator | 3fae32685145 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-17 04:50:42.876246 | orchestrator | 5e486a2c19b1 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-17 04:50:42.876258 | orchestrator | 06da7d2c5f01 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-17 04:50:42.876270 | orchestrator | 0f91ffc67222 
registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-17 04:50:42.876303 | orchestrator | ce7aebf1a952 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-17 04:50:42.876315 | orchestrator | ae69a93e3b0c registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-17 04:50:42.876392 | orchestrator | cf4aaceaad33 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-17 04:50:42.876404 | orchestrator | e9c7c6393866 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-17 04:50:42.876438 | orchestrator | 38f0563f3c45 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-17 04:50:42.876450 | orchestrator | 98e0da3c33a1 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-17 04:50:42.876461 | orchestrator | eba6a66980a1 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-17 04:50:42.876552 | orchestrator | 92bca6e7f72a registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-17 04:50:42.876573 | orchestrator | 884f15b2302a registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-17 04:50:42.876594 | orchestrator | 997508577a0a 
registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-17 04:50:42.876613 | orchestrator | 889a2981ca4e registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-17 04:50:42.876657 | orchestrator | 0f56aa6a62fc registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-17 04:50:42.876680 | orchestrator | 1d880956dc54 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-17 04:50:42.876696 | orchestrator | d302c2d6de11 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-17 04:50:42.876710 | orchestrator | 1da8834a7e60 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-17 04:50:42.876722 | orchestrator | 3a6edb026510 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-17 04:50:42.876736 | orchestrator | 07a56c1f4648 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-17 04:50:42.876758 | orchestrator | 7dd385d0e432 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-17 04:50:42.876769 | orchestrator | 8d67d0afb1ee registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 
2026-02-17 04:50:42.876779 | orchestrator | e2bf56a0e8d3 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-17 04:50:42.876790 | orchestrator | 82bac2ba5c45 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-17 04:50:42.876801 | orchestrator | 9e9d0669d454 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-17 04:50:42.876812 | orchestrator | 3ecdd65c9ccb registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-17 04:50:42.876823 | orchestrator | 6d2db3a25d7f registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-17 04:50:42.876834 | orchestrator | 8806b5211433 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-17 04:50:42.876844 | orchestrator | 26d18d7b33fb registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-17 04:50:42.876855 | orchestrator | 2f5f0c58c66b registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-17 04:50:42.876866 | orchestrator | 7fec88a82766 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-17 04:50:42.876877 | orchestrator | 8bf228e8ef33 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes 
(healthy) glance_api 2026-02-17 04:50:42.876888 | orchestrator | dbc1d2c9f7bf registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-17 04:50:42.876898 | orchestrator | 077cc49bd40f registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-17 04:50:42.876917 | orchestrator | 01f087447014 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-17 04:50:42.876928 | orchestrator | 63346aa4fcdf registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-17 04:50:42.876939 | orchestrator | 6de197d8f05a registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-17 04:50:42.876950 | orchestrator | 41cafa555d7a registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-17 04:50:42.876968 | orchestrator | e3e22a2a85e9 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-17 04:50:42.876979 | orchestrator | 845ea493e604 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-17 04:50:42.876991 | orchestrator | daa5146830e2 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-17 04:50:42.877002 | orchestrator | 6e42f0ccfb30 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 2026-02-17 
04:50:42.877013 | orchestrator | f1d6922fec89 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-02-17 04:50:42.877023 | orchestrator | 860c536395b4 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-17 04:50:42.877144 | orchestrator | 7445d435cece registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-2 2026-02-17 04:50:42.877158 | orchestrator | e3d295ccb805 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-17 04:50:42.877176 | orchestrator | 4f72f9ce519e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-17 04:50:42.877188 | orchestrator | f5ec897c5b8f registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-17 04:50:42.877203 | orchestrator | ee7c0d873f4b registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-17 04:50:42.877214 | orchestrator | 2570f5f0c7b9 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-17 04:50:42.877225 | orchestrator | aff6c17c337c registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-17 04:50:42.877236 | orchestrator | 15dac44f2895 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-17 04:50:42.877247 | orchestrator | b6cac1d9df44 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-17 04:50:42.877258 | orchestrator | a685a15fa13d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-17 04:50:42.877270 | orchestrator | d8edf9d17c51 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-17 04:50:42.877281 | orchestrator | 765303e74535 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-17 04:50:42.877299 | orchestrator | cecc9ee38b5b registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-17 04:50:42.877311 | orchestrator | 5c731ccdac7d registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-17 04:50:42.877322 | orchestrator | a0a1ed629172 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-17 04:50:42.877373 | orchestrator | ea5752e14d46 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-17 04:50:42.877384 | orchestrator | 318263cfd47c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-17 04:50:42.877395 | orchestrator | 9877aa7c93a0 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-17 04:50:42.877406 | orchestrator | 86ac21cd0d5e 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-17 04:50:42.877417 | orchestrator | 737c33f99058 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-17 04:50:42.877435 | orchestrator | 88e46365bf12 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-17 04:50:42.877447 | orchestrator | efa834ba8e35 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-17 04:50:43.210478 | orchestrator | 2026-02-17 04:50:43.210578 | orchestrator | ## Images @ testbed-node-2 2026-02-17 04:50:43.210595 | orchestrator | 2026-02-17 04:50:43.210608 | orchestrator | + echo 2026-02-17 04:50:43.210620 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-17 04:50:43.210632 | orchestrator | + echo 2026-02-17 04:50:43.210644 | orchestrator | + osism container testbed-node-2 images 2026-02-17 04:50:45.649056 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-17 04:50:45.649159 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-17 04:50:45.649174 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-17 04:50:45.649186 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-17 04:50:45.649214 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-17 04:50:45.649226 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-17 04:50:45.649237 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-17 04:50:45.649249 | 
orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-17 04:50:45.649260 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-17 04:50:45.649311 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-17 04:50:45.649323 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-17 04:50:45.649426 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-17 04:50:45.649439 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-17 04:50:45.649450 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-17 04:50:45.649461 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-17 04:50:45.649473 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-17 04:50:45.649484 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-17 04:50:45.649495 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-17 04:50:45.649506 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-17 04:50:45.649517 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-17 04:50:45.649528 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-17 04:50:45.649538 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-17 04:50:45.649549 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-17 04:50:45.649560 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-17 04:50:45.649571 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-17 04:50:45.649582 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-17 04:50:45.649595 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-17 04:50:45.649607 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-17 04:50:45.649620 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-17 04:50:45.649633 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-17 04:50:45.649645 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-17 04:50:45.649658 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-17 04:50:45.649689 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-17 04:50:45.649702 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-17 04:50:45.649715 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-17 04:50:45.649728 | orchestrator | 
registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-17 04:50:45.649750 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-17 04:50:45.649763 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-17 04:50:45.649776 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-17 04:50:45.649796 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-17 04:50:45.649809 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-17 04:50:45.649822 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-17 04:50:45.649834 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-17 04:50:45.649847 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-17 04:50:45.649859 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-17 04:50:45.649871 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-17 04:50:45.649883 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-17 04:50:45.649896 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-17 04:50:45.649908 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-17 04:50:45.649920 | orchestrator | 
registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-17 04:50:45.649933 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-17 04:50:45.649945 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-17 04:50:45.649957 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-17 04:50:45.649970 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-17 04:50:45.649982 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-17 04:50:45.649993 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-17 04:50:45.650004 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-17 04:50:45.650015 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-17 04:50:45.650086 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-17 04:50:45.650097 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-17 04:50:45.650108 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-17 04:50:45.650119 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-17 04:50:45.650137 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-17 04:50:45.650148 | orchestrator 
| registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-17 04:50:45.650167 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-17 04:50:45.650179 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-17 04:50:45.650190 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-17 04:50:45.650200 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-17 04:50:45.650216 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-17 04:50:45.650228 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-17 04:50:45.961807 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-02-17 04:50:45.971656 | orchestrator | + set -e 2026-02-17 04:50:45.971741 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 04:50:45.971755 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 04:50:45.971767 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 04:50:45.971777 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 04:50:45.971788 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 04:50:45.971799 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 04:50:45.971811 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 04:50:45.971822 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 04:50:45.971833 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 04:50:45.971843 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 04:50:45.971854 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 04:50:45.971865 | orchestrator | ++ export ARA=false 2026-02-17 04:50:45.971876 | orchestrator | ++ ARA=false 
2026-02-17 04:50:45.971886 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 04:50:45.971897 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-17 04:50:45.971908 | orchestrator | ++ export TEMPEST=false 2026-02-17 04:50:45.971919 | orchestrator | ++ TEMPEST=false 2026-02-17 04:50:45.971930 | orchestrator | ++ export IS_ZUUL=true 2026-02-17 04:50:45.971940 | orchestrator | ++ IS_ZUUL=true 2026-02-17 04:50:45.971951 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 04:50:45.971963 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 04:50:45.971973 | orchestrator | ++ export EXTERNAL_API=false 2026-02-17 04:50:45.971984 | orchestrator | ++ EXTERNAL_API=false 2026-02-17 04:50:45.971995 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-17 04:50:45.972005 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-17 04:50:45.972018 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-17 04:50:45.972028 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-17 04:50:45.972039 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-17 04:50:45.972050 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-17 04:50:45.972061 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-17 04:50:45.972072 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-02-17 04:50:45.978715 | orchestrator | + set -e 2026-02-17 04:50:45.978780 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 04:50:45.978797 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 04:50:45.978811 | orchestrator | ++ INTERACTIVE=false 2026-02-17 04:50:45.978822 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 04:50:45.978833 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 04:50:45.978848 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-17 04:50:45.980072 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-02-17 04:50:45.986736 | orchestrator | 2026-02-17 04:50:45.986790 | orchestrator | # Ceph status 2026-02-17 04:50:45.986804 | orchestrator | 2026-02-17 04:50:45.986816 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 04:50:45.986828 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 04:50:45.986839 | orchestrator | + echo 2026-02-17 04:50:45.986850 | orchestrator | + echo '# Ceph status' 2026-02-17 04:50:45.986888 | orchestrator | + echo 2026-02-17 04:50:45.986899 | orchestrator | + ceph -s 2026-02-17 04:50:46.593189 | orchestrator | cluster: 2026-02-17 04:50:46.593304 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-02-17 04:50:46.593322 | orchestrator | health: HEALTH_OK 2026-02-17 04:50:46.593359 | orchestrator | 2026-02-17 04:50:46.593371 | orchestrator | services: 2026-02-17 04:50:46.593382 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 68m) 2026-02-17 04:50:46.593395 | orchestrator | mgr: testbed-node-2(active, since 55m), standbys: testbed-node-1, testbed-node-0 2026-02-17 04:50:46.593407 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-02-17 04:50:46.593419 | orchestrator | osd: 6 osds: 6 up (since 64m), 6 in (since 65m) 2026-02-17 04:50:46.593444 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-02-17 04:50:46.593455 | orchestrator | 2026-02-17 04:50:46.593466 | orchestrator | data: 2026-02-17 04:50:46.593489 | orchestrator | volumes: 1/1 healthy 2026-02-17 04:50:46.593500 | orchestrator | pools: 14 pools, 401 pgs 2026-02-17 04:50:46.593511 | orchestrator | objects: 555 objects, 2.2 GiB 2026-02-17 04:50:46.593522 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail 2026-02-17 04:50:46.593533 | orchestrator | pgs: 401 active+clean 2026-02-17 04:50:46.593544 | orchestrator | 2026-02-17 04:50:46.638729 | orchestrator | 2026-02-17 04:50:46.638829 | orchestrator | # Ceph versions 2026-02-17 
04:50:46.638845 | orchestrator | 2026-02-17 04:50:46.638858 | orchestrator | + echo 2026-02-17 04:50:46.638870 | orchestrator | + echo '# Ceph versions' 2026-02-17 04:50:46.638882 | orchestrator | + echo 2026-02-17 04:50:46.638893 | orchestrator | + ceph versions 2026-02-17 04:50:47.251739 | orchestrator | { 2026-02-17 04:50:47.251840 | orchestrator | "mon": { 2026-02-17 04:50:47.251858 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-17 04:50:47.251872 | orchestrator | }, 2026-02-17 04:50:47.251884 | orchestrator | "mgr": { 2026-02-17 04:50:47.251895 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-17 04:50:47.251907 | orchestrator | }, 2026-02-17 04:50:47.251918 | orchestrator | "osd": { 2026-02-17 04:50:47.251929 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-02-17 04:50:47.251940 | orchestrator | }, 2026-02-17 04:50:47.251951 | orchestrator | "mds": { 2026-02-17 04:50:47.251962 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-17 04:50:47.251973 | orchestrator | }, 2026-02-17 04:50:47.251983 | orchestrator | "rgw": { 2026-02-17 04:50:47.251994 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-17 04:50:47.252005 | orchestrator | }, 2026-02-17 04:50:47.252016 | orchestrator | "overall": { 2026-02-17 04:50:47.252028 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-02-17 04:50:47.252039 | orchestrator | } 2026-02-17 04:50:47.252050 | orchestrator | } 2026-02-17 04:50:47.295820 | orchestrator | 2026-02-17 04:50:47.295936 | orchestrator | # Ceph OSD tree 2026-02-17 04:50:47.295950 | orchestrator | 2026-02-17 04:50:47.295961 | orchestrator | + echo 2026-02-17 04:50:47.295973 | orchestrator | + echo '# Ceph OSD tree' 2026-02-17 
04:50:47.295985 | orchestrator | + echo 2026-02-17 04:50:47.295996 | orchestrator | + ceph osd df tree 2026-02-17 04:50:47.839124 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-02-17 04:50:47.839237 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 385 MiB 113 GiB 5.88 1.00 - root default 2026-02-17 04:50:47.839251 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3 2026-02-17 04:50:47.839263 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.79 0.98 199 up osd.0 2026-02-17 04:50:47.839274 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.95 1.01 193 up osd.5 2026-02-17 04:50:47.839286 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4 2026-02-17 04:50:47.839296 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.53 1.11 184 up osd.1 2026-02-17 04:50:47.839384 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 1003 MiB 1 KiB 62 MiB 19 GiB 5.20 0.89 204 up osd.3 2026-02-17 04:50:47.839398 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-02-17 04:50:47.839410 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 78 MiB 18 GiB 7.63 1.30 195 up osd.2 2026-02-17 04:50:47.839422 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 856 MiB 795 MiB 1 KiB 62 MiB 19 GiB 4.19 0.71 195 up osd.4 2026-02-17 04:50:47.839433 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 385 MiB 113 GiB 5.88 2026-02-17 04:50:47.839444 | orchestrator | MIN/MAX VAR: 0.71/1.30 STDDEV: 1.07 2026-02-17 04:50:47.886585 | orchestrator | 2026-02-17 04:50:47.886697 | orchestrator | # Ceph monitor status 2026-02-17 04:50:47.886713 | orchestrator | 2026-02-17 04:50:47.886724 | orchestrator | + echo 2026-02-17 04:50:47.886736 | orchestrator | + echo '# 
Ceph monitor status' 2026-02-17 04:50:47.886747 | orchestrator | + echo 2026-02-17 04:50:47.886759 | orchestrator | + ceph mon stat 2026-02-17 04:50:48.481463 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-02-17 04:50:48.524125 | orchestrator | 2026-02-17 04:50:48.524225 | orchestrator | # Ceph quorum status 2026-02-17 04:50:48.524241 | orchestrator | 2026-02-17 04:50:48.524254 | orchestrator | + echo 2026-02-17 04:50:48.524276 | orchestrator | + echo '# Ceph quorum status' 2026-02-17 04:50:48.524297 | orchestrator | + echo 2026-02-17 04:50:48.524624 | orchestrator | + ceph quorum_status 2026-02-17 04:50:48.524651 | orchestrator | + jq 2026-02-17 04:50:49.189712 | orchestrator | { 2026-02-17 04:50:49.189813 | orchestrator | "election_epoch": 8, 2026-02-17 04:50:49.189828 | orchestrator | "quorum": [ 2026-02-17 04:50:49.189840 | orchestrator | 0, 2026-02-17 04:50:49.189851 | orchestrator | 1, 2026-02-17 04:50:49.189862 | orchestrator | 2 2026-02-17 04:50:49.189872 | orchestrator | ], 2026-02-17 04:50:49.189883 | orchestrator | "quorum_names": [ 2026-02-17 04:50:49.189893 | orchestrator | "testbed-node-0", 2026-02-17 04:50:49.189904 | orchestrator | "testbed-node-1", 2026-02-17 04:50:49.189915 | orchestrator | "testbed-node-2" 2026-02-17 04:50:49.189926 | orchestrator | ], 2026-02-17 04:50:49.189937 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-02-17 04:50:49.189948 | orchestrator | "quorum_age": 4115, 2026-02-17 04:50:49.189959 | orchestrator | "features": { 2026-02-17 04:50:49.189969 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-17 04:50:49.189980 | orchestrator | "quorum_mon": [ 2026-02-17 04:50:49.189991 | 
orchestrator | "kraken", 2026-02-17 04:50:49.190001 | orchestrator | "luminous", 2026-02-17 04:50:49.190012 | orchestrator | "mimic", 2026-02-17 04:50:49.190132 | orchestrator | "osdmap-prune", 2026-02-17 04:50:49.190144 | orchestrator | "nautilus", 2026-02-17 04:50:49.190155 | orchestrator | "octopus", 2026-02-17 04:50:49.190166 | orchestrator | "pacific", 2026-02-17 04:50:49.190176 | orchestrator | "elector-pinging", 2026-02-17 04:50:49.190189 | orchestrator | "quincy", 2026-02-17 04:50:49.190207 | orchestrator | "reef" 2026-02-17 04:50:49.190226 | orchestrator | ] 2026-02-17 04:50:49.190246 | orchestrator | }, 2026-02-17 04:50:49.190268 | orchestrator | "monmap": { 2026-02-17 04:50:49.190288 | orchestrator | "epoch": 1, 2026-02-17 04:50:49.190305 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-17 04:50:49.190319 | orchestrator | "modified": "2026-02-17T03:41:56.507824Z", 2026-02-17 04:50:49.190355 | orchestrator | "created": "2026-02-17T03:41:56.507824Z", 2026-02-17 04:50:49.190368 | orchestrator | "min_mon_release": 18, 2026-02-17 04:50:49.190381 | orchestrator | "min_mon_release_name": "reef", 2026-02-17 04:50:49.190393 | orchestrator | "election_strategy": 1, 2026-02-17 04:50:49.190405 | orchestrator | "disallowed_leaders: ": "", 2026-02-17 04:50:49.190417 | orchestrator | "stretch_mode": false, 2026-02-17 04:50:49.190429 | orchestrator | "tiebreaker_mon": "", 2026-02-17 04:50:49.190441 | orchestrator | "removed_ranks: ": "", 2026-02-17 04:50:49.190453 | orchestrator | "features": { 2026-02-17 04:50:49.190465 | orchestrator | "persistent": [ 2026-02-17 04:50:49.190477 | orchestrator | "kraken", 2026-02-17 04:50:49.190516 | orchestrator | "luminous", 2026-02-17 04:50:49.190687 | orchestrator | "mimic", 2026-02-17 04:50:49.190700 | orchestrator | "osdmap-prune", 2026-02-17 04:50:49.190711 | orchestrator | "nautilus", 2026-02-17 04:50:49.190721 | orchestrator | "octopus", 2026-02-17 04:50:49.190732 | orchestrator | "pacific", 2026-02-17 
04:50:49.190743 | orchestrator | "elector-pinging", 2026-02-17 04:50:49.190753 | orchestrator | "quincy", 2026-02-17 04:50:49.190764 | orchestrator | "reef" 2026-02-17 04:50:49.190775 | orchestrator | ], 2026-02-17 04:50:49.190785 | orchestrator | "optional": [] 2026-02-17 04:50:49.190796 | orchestrator | }, 2026-02-17 04:50:49.190807 | orchestrator | "mons": [ 2026-02-17 04:50:49.190835 | orchestrator | { 2026-02-17 04:50:49.190847 | orchestrator | "rank": 0, 2026-02-17 04:50:49.190858 | orchestrator | "name": "testbed-node-0", 2026-02-17 04:50:49.190869 | orchestrator | "public_addrs": { 2026-02-17 04:50:49.190879 | orchestrator | "addrvec": [ 2026-02-17 04:50:49.190890 | orchestrator | { 2026-02-17 04:50:49.190901 | orchestrator | "type": "v2", 2026-02-17 04:50:49.190912 | orchestrator | "addr": "192.168.16.8:3300", 2026-02-17 04:50:49.190922 | orchestrator | "nonce": 0 2026-02-17 04:50:49.190934 | orchestrator | }, 2026-02-17 04:50:49.190944 | orchestrator | { 2026-02-17 04:50:49.190955 | orchestrator | "type": "v1", 2026-02-17 04:50:49.190966 | orchestrator | "addr": "192.168.16.8:6789", 2026-02-17 04:50:49.190977 | orchestrator | "nonce": 0 2026-02-17 04:50:49.190987 | orchestrator | } 2026-02-17 04:50:49.190998 | orchestrator | ] 2026-02-17 04:50:49.191009 | orchestrator | }, 2026-02-17 04:50:49.191019 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-02-17 04:50:49.191030 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-02-17 04:50:49.191041 | orchestrator | "priority": 0, 2026-02-17 04:50:49.191052 | orchestrator | "weight": 0, 2026-02-17 04:50:49.191062 | orchestrator | "crush_location": "{}" 2026-02-17 04:50:49.191073 | orchestrator | }, 2026-02-17 04:50:49.191084 | orchestrator | { 2026-02-17 04:50:49.191095 | orchestrator | "rank": 1, 2026-02-17 04:50:49.191105 | orchestrator | "name": "testbed-node-1", 2026-02-17 04:50:49.191116 | orchestrator | "public_addrs": { 2026-02-17 04:50:49.191127 | orchestrator | "addrvec": [ 2026-02-17 
04:50:49.191137 | orchestrator | { 2026-02-17 04:50:49.191148 | orchestrator | "type": "v2", 2026-02-17 04:50:49.191159 | orchestrator | "addr": "192.168.16.11:3300", 2026-02-17 04:50:49.191170 | orchestrator | "nonce": 0 2026-02-17 04:50:49.191180 | orchestrator | }, 2026-02-17 04:50:49.191191 | orchestrator | { 2026-02-17 04:50:49.191202 | orchestrator | "type": "v1", 2026-02-17 04:50:49.191212 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-17 04:50:49.191223 | orchestrator | "nonce": 0 2026-02-17 04:50:49.191233 | orchestrator | } 2026-02-17 04:50:49.191244 | orchestrator | ] 2026-02-17 04:50:49.191255 | orchestrator | }, 2026-02-17 04:50:49.191266 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-17 04:50:49.191277 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-17 04:50:49.191288 | orchestrator | "priority": 0, 2026-02-17 04:50:49.191300 | orchestrator | "weight": 0, 2026-02-17 04:50:49.191312 | orchestrator | "crush_location": "{}" 2026-02-17 04:50:49.191324 | orchestrator | }, 2026-02-17 04:50:49.191359 | orchestrator | { 2026-02-17 04:50:49.191372 | orchestrator | "rank": 2, 2026-02-17 04:50:49.191384 | orchestrator | "name": "testbed-node-2", 2026-02-17 04:50:49.191396 | orchestrator | "public_addrs": { 2026-02-17 04:50:49.191408 | orchestrator | "addrvec": [ 2026-02-17 04:50:49.191420 | orchestrator | { 2026-02-17 04:50:49.191432 | orchestrator | "type": "v2", 2026-02-17 04:50:49.191444 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-17 04:50:49.191456 | orchestrator | "nonce": 0 2026-02-17 04:50:49.191468 | orchestrator | }, 2026-02-17 04:50:49.191480 | orchestrator | { 2026-02-17 04:50:49.191492 | orchestrator | "type": "v1", 2026-02-17 04:50:49.191504 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-17 04:50:49.191516 | orchestrator | "nonce": 0 2026-02-17 04:50:49.191528 | orchestrator | } 2026-02-17 04:50:49.191540 | orchestrator | ] 2026-02-17 04:50:49.191552 | orchestrator | }, 2026-02-17 04:50:49.191564 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-17 04:50:49.191577 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-17 04:50:49.191588 | orchestrator | "priority": 0, 2026-02-17 04:50:49.191610 | orchestrator | "weight": 0, 2026-02-17 04:50:49.191622 | orchestrator | "crush_location": "{}" 2026-02-17 04:50:49.191635 | orchestrator | } 2026-02-17 04:50:49.191647 | orchestrator | ] 2026-02-17 04:50:49.191657 | orchestrator | } 2026-02-17 04:50:49.191668 | orchestrator | } 2026-02-17 04:50:49.191693 | orchestrator | 2026-02-17 04:50:49.191704 | orchestrator | # Ceph free space status 2026-02-17 04:50:49.191715 | orchestrator | 2026-02-17 04:50:49.191726 | orchestrator | + echo 2026-02-17 04:50:49.191737 | orchestrator | + echo '# Ceph free space status' 2026-02-17 04:50:49.191747 | orchestrator | + echo 2026-02-17 04:50:49.191759 | orchestrator | + ceph df 2026-02-17 04:50:49.757601 | orchestrator | --- RAW STORAGE --- 2026-02-17 04:50:49.758779 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-17 04:50:49.758866 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.88 2026-02-17 04:50:49.758894 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.88 2026-02-17 04:50:49.758907 | orchestrator | 2026-02-17 04:50:49.758919 | orchestrator | --- POOLS --- 2026-02-17 04:50:49.758931 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-17 04:50:49.758944 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-02-17 04:50:49.758968 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-17 04:50:49.758980 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-17 04:50:49.758991 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-17 04:50:49.759002 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-17 04:50:49.759013 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-17 04:50:49.759024 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-02-17 04:50:49.759035 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-17 04:50:49.759046 | orchestrator | .rgw.root 9 32 3.0 KiB 7 56 KiB 0 52 GiB 2026-02-17 04:50:49.759057 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-17 04:50:49.759068 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-17 04:50:49.759079 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2026-02-17 04:50:49.759090 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-17 04:50:49.759101 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-17 04:50:49.803686 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-17 04:50:49.879119 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-17 04:50:49.879214 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-17 04:50:49.879230 | orchestrator | + osism apply facts 2026-02-17 04:51:02.092473 | orchestrator | 2026-02-17 04:51:02 | INFO  | Task 2182d43e-d06d-43ff-8cce-282d6fe9bb0a (facts) was prepared for execution. 2026-02-17 04:51:02.092592 | orchestrator | 2026-02-17 04:51:02 | INFO  | It takes a moment until task 2182d43e-d06d-43ff-8cce-282d6fe9bb0a (facts) has been started and output is visible here. 
2026-02-17 04:51:15.731336 | orchestrator | 2026-02-17 04:51:15.731442 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-17 04:51:15.731449 | orchestrator | 2026-02-17 04:51:15.731454 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-17 04:51:15.731459 | orchestrator | Tuesday 17 February 2026 04:51:06 +0000 (0:00:00.276) 0:00:00.276 ****** 2026-02-17 04:51:15.731464 | orchestrator | ok: [testbed-manager] 2026-02-17 04:51:15.731469 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:51:15.731473 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:51:15.731477 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:51:15.731481 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:51:15.731485 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:51:15.731489 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:51:15.731493 | orchestrator | 2026-02-17 04:51:15.731497 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-17 04:51:15.731517 | orchestrator | Tuesday 17 February 2026 04:51:07 +0000 (0:00:01.153) 0:00:01.429 ****** 2026-02-17 04:51:15.731521 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:51:15.731526 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:51:15.731530 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:51:15.731534 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:51:15.731538 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:51:15.731542 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:51:15.731546 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:51:15.731550 | orchestrator | 2026-02-17 04:51:15.731554 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-17 04:51:15.731558 | orchestrator | 2026-02-17 04:51:15.731562 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-17 04:51:15.731572 | orchestrator | Tuesday 17 February 2026 04:51:09 +0000 (0:00:01.334) 0:00:02.764 ****** 2026-02-17 04:51:15.731577 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:51:15.731580 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:51:15.731584 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:51:15.731588 | orchestrator | ok: [testbed-manager] 2026-02-17 04:51:15.731598 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:51:15.731602 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:51:15.731606 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:51:15.731610 | orchestrator | 2026-02-17 04:51:15.731614 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-17 04:51:15.731618 | orchestrator | 2026-02-17 04:51:15.731622 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-17 04:51:15.731626 | orchestrator | Tuesday 17 February 2026 04:51:14 +0000 (0:00:05.761) 0:00:08.525 ****** 2026-02-17 04:51:15.731630 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:51:15.731634 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:51:15.731638 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:51:15.731641 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:51:15.731645 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:51:15.731649 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:51:15.731653 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:51:15.731657 | orchestrator | 2026-02-17 04:51:15.731661 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 04:51:15.731665 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 04:51:15.731670 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-17 04:51:15.731674 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 04:51:15.731688 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 04:51:15.731692 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 04:51:15.731696 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 04:51:15.731700 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 04:51:15.731704 | orchestrator | 2026-02-17 04:51:15.731708 | orchestrator | 2026-02-17 04:51:15.731712 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 04:51:15.731715 | orchestrator | Tuesday 17 February 2026 04:51:15 +0000 (0:00:00.560) 0:00:09.086 ****** 2026-02-17 04:51:15.731719 | orchestrator | =============================================================================== 2026-02-17 04:51:15.731723 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.76s 2026-02-17 04:51:15.731731 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s 2026-02-17 04:51:15.731735 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2026-02-17 04:51:15.731739 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-02-17 04:51:16.043472 | orchestrator | + osism validate ceph-mons 2026-02-17 04:51:38.319943 | orchestrator | 2026-02-17 04:51:38.320064 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-17 04:51:38.320092 | orchestrator | 2026-02-17 04:51:38.320113 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-17 04:51:38.320134 | orchestrator | Tuesday 17 February 2026 04:51:22 +0000 (0:00:00.511) 0:00:00.511 ****** 2026-02-17 04:51:38.320154 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-17 04:51:38.320169 | orchestrator | 2026-02-17 04:51:38.320187 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-17 04:51:38.320207 | orchestrator | Tuesday 17 February 2026 04:51:23 +0000 (0:00:00.899) 0:00:01.411 ****** 2026-02-17 04:51:38.320226 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-17 04:51:38.320243 | orchestrator | 2026-02-17 04:51:38.320254 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-17 04:51:38.320265 | orchestrator | Tuesday 17 February 2026 04:51:24 +0000 (0:00:00.975) 0:00:02.387 ****** 2026-02-17 04:51:38.320276 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:51:38.320288 | orchestrator | 2026-02-17 04:51:38.320299 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-17 04:51:38.320310 | orchestrator | Tuesday 17 February 2026 04:51:24 +0000 (0:00:00.129) 0:00:02.516 ****** 2026-02-17 04:51:38.320321 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:51:38.320332 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:51:38.320343 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:51:38.320354 | orchestrator | 2026-02-17 04:51:38.320365 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-17 04:51:38.320376 | orchestrator | Tuesday 17 February 2026 04:51:24 +0000 (0:00:00.286) 0:00:02.803 ****** 2026-02-17 04:51:38.320387 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:51:38.320471 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:51:38.320494 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:51:38.320508 | 
orchestrator | 2026-02-17 04:51:38.320520 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-17 04:51:38.320533 | orchestrator | Tuesday 17 February 2026 04:51:26 +0000 (0:00:01.102) 0:00:03.905 ****** 2026-02-17 04:51:38.320547 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:51:38.320560 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:51:38.320573 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:51:38.320585 | orchestrator | 2026-02-17 04:51:38.320599 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-17 04:51:38.320617 | orchestrator | Tuesday 17 February 2026 04:51:26 +0000 (0:00:00.275) 0:00:04.181 ****** 2026-02-17 04:51:38.320636 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:51:38.320654 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:51:38.320673 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:51:38.320692 | orchestrator | 2026-02-17 04:51:38.320710 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-17 04:51:38.320728 | orchestrator | Tuesday 17 February 2026 04:51:26 +0000 (0:00:00.511) 0:00:04.692 ****** 2026-02-17 04:51:38.320747 | orchestrator | ok: [testbed-node-0] 2026-02-17 04:51:38.320765 | orchestrator | ok: [testbed-node-1] 2026-02-17 04:51:38.320783 | orchestrator | ok: [testbed-node-2] 2026-02-17 04:51:38.320800 | orchestrator | 2026-02-17 04:51:38.320817 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-17 04:51:38.320835 | orchestrator | Tuesday 17 February 2026 04:51:27 +0000 (0:00:00.311) 0:00:05.004 ****** 2026-02-17 04:51:38.320854 | orchestrator | skipping: [testbed-node-0] 2026-02-17 04:51:38.320905 | orchestrator | skipping: [testbed-node-1] 2026-02-17 04:51:38.320925 | orchestrator | skipping: [testbed-node-2] 2026-02-17 04:51:38.320936 | orchestrator | 2026-02-17 
04:51:38.320947 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-02-17 04:51:38.320959 | orchestrator | Tuesday 17 February 2026 04:51:27 +0000 (0:00:00.298) 0:00:05.302 ******
2026-02-17 04:51:38.320969 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.320980 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:51:38.320991 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:51:38.321002 | orchestrator |
2026-02-17 04:51:38.321013 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-17 04:51:38.321024 | orchestrator | Tuesday 17 February 2026 04:51:27 +0000 (0:00:00.472) 0:00:05.775 ******
2026-02-17 04:51:38.321035 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.321046 | orchestrator |
2026-02-17 04:51:38.321057 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-17 04:51:38.321068 | orchestrator | Tuesday 17 February 2026 04:51:28 +0000 (0:00:00.279) 0:00:06.055 ******
2026-02-17 04:51:38.321079 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.321092 | orchestrator |
2026-02-17 04:51:38.321111 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-17 04:51:38.321127 | orchestrator | Tuesday 17 February 2026 04:51:28 +0000 (0:00:00.258) 0:00:06.314 ******
2026-02-17 04:51:38.321144 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.321160 | orchestrator |
2026-02-17 04:51:38.321176 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:51:38.321195 | orchestrator | Tuesday 17 February 2026 04:51:28 +0000 (0:00:00.244) 0:00:06.558 ******
2026-02-17 04:51:38.321214 | orchestrator |
2026-02-17 04:51:38.321233 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:51:38.321251 | orchestrator | Tuesday 17 February 2026 04:51:28 +0000 (0:00:00.074) 0:00:06.633 ******
2026-02-17 04:51:38.321269 | orchestrator |
2026-02-17 04:51:38.321281 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:51:38.321292 | orchestrator | Tuesday 17 February 2026 04:51:28 +0000 (0:00:00.071) 0:00:06.705 ******
2026-02-17 04:51:38.321302 | orchestrator |
2026-02-17 04:51:38.321313 | orchestrator | TASK [Print report file information] *******************************************
2026-02-17 04:51:38.321324 | orchestrator | Tuesday 17 February 2026 04:51:28 +0000 (0:00:00.076) 0:00:06.781 ******
2026-02-17 04:51:38.321335 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.321346 | orchestrator |
2026-02-17 04:51:38.321357 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-17 04:51:38.321387 | orchestrator | Tuesday 17 February 2026 04:51:29 +0000 (0:00:00.248) 0:00:07.029 ******
2026-02-17 04:51:38.321428 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.321440 | orchestrator |
2026-02-17 04:51:38.321474 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-02-17 04:51:38.321486 | orchestrator | Tuesday 17 February 2026 04:51:29 +0000 (0:00:00.259) 0:00:07.289 ******
2026-02-17 04:51:38.321497 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.321508 | orchestrator |
2026-02-17 04:51:38.321519 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-02-17 04:51:38.321530 | orchestrator | Tuesday 17 February 2026 04:51:29 +0000 (0:00:00.141) 0:00:07.430 ******
2026-02-17 04:51:38.321541 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:51:38.321557 | orchestrator |
2026-02-17 04:51:38.321568 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-02-17 04:51:38.321580 | orchestrator | Tuesday 17 February 2026 04:51:31 +0000 (0:00:01.521) 0:00:08.952 ******
2026-02-17 04:51:38.321590 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.321601 | orchestrator |
2026-02-17 04:51:38.321612 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-17 04:51:38.321623 | orchestrator | Tuesday 17 February 2026 04:51:31 +0000 (0:00:00.557) 0:00:09.510 ******
2026-02-17 04:51:38.321646 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.321658 | orchestrator |
2026-02-17 04:51:38.321669 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-17 04:51:38.321680 | orchestrator | Tuesday 17 February 2026 04:51:31 +0000 (0:00:00.155) 0:00:09.665 ******
2026-02-17 04:51:38.321690 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.321701 | orchestrator |
2026-02-17 04:51:38.321712 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-17 04:51:38.321723 | orchestrator | Tuesday 17 February 2026 04:51:32 +0000 (0:00:00.322) 0:00:09.987 ******
2026-02-17 04:51:38.321734 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.321745 | orchestrator |
2026-02-17 04:51:38.321756 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-17 04:51:38.321768 | orchestrator | Tuesday 17 February 2026 04:51:32 +0000 (0:00:00.314) 0:00:10.301 ******
2026-02-17 04:51:38.321779 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.321790 | orchestrator |
2026-02-17 04:51:38.321801 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-17 04:51:38.321812 | orchestrator | Tuesday 17 February 2026 04:51:32 +0000 (0:00:00.113) 0:00:10.415 ******
2026-02-17 04:51:38.321823 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.321834 | orchestrator |
2026-02-17 04:51:38.321845 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-02-17 04:51:38.321856 | orchestrator | Tuesday 17 February 2026 04:51:32 +0000 (0:00:00.129) 0:00:10.545 ******
2026-02-17 04:51:38.321867 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.321877 | orchestrator |
2026-02-17 04:51:38.321888 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-17 04:51:38.321899 | orchestrator | Tuesday 17 February 2026 04:51:32 +0000 (0:00:00.149) 0:00:10.694 ******
2026-02-17 04:51:38.321910 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:51:38.321921 | orchestrator |
2026-02-17 04:51:38.321932 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-17 04:51:38.321943 | orchestrator | Tuesday 17 February 2026 04:51:34 +0000 (0:00:01.302) 0:00:11.996 ******
2026-02-17 04:51:38.321954 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.321965 | orchestrator |
2026-02-17 04:51:38.321976 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-17 04:51:38.321987 | orchestrator | Tuesday 17 February 2026 04:51:34 +0000 (0:00:00.303) 0:00:12.299 ******
2026-02-17 04:51:38.321998 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.322008 | orchestrator |
2026-02-17 04:51:38.322084 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-17 04:51:38.322099 | orchestrator | Tuesday 17 February 2026 04:51:34 +0000 (0:00:00.145) 0:00:12.445 ******
2026-02-17 04:51:38.322110 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:51:38.322121 | orchestrator |
2026-02-17 04:51:38.322132 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-17 04:51:38.322143 | orchestrator | Tuesday 17 February 2026 04:51:34 +0000 (0:00:00.159) 0:00:12.605 ******
2026-02-17 04:51:38.322154 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.322168 | orchestrator |
2026-02-17 04:51:38.322187 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-17 04:51:38.322206 | orchestrator | Tuesday 17 February 2026 04:51:34 +0000 (0:00:00.144) 0:00:12.749 ******
2026-02-17 04:51:38.322234 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.322255 | orchestrator |
2026-02-17 04:51:38.322274 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-17 04:51:38.322293 | orchestrator | Tuesday 17 February 2026 04:51:35 +0000 (0:00:00.340) 0:00:13.090 ******
2026-02-17 04:51:38.322304 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:51:38.322315 | orchestrator |
2026-02-17 04:51:38.322326 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-17 04:51:38.322337 | orchestrator | Tuesday 17 February 2026 04:51:35 +0000 (0:00:00.282) 0:00:13.372 ******
2026-02-17 04:51:38.322357 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:51:38.322368 | orchestrator |
2026-02-17 04:51:38.322379 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-17 04:51:38.322390 | orchestrator | Tuesday 17 February 2026 04:51:35 +0000 (0:00:00.252) 0:00:13.625 ******
2026-02-17 04:51:38.322427 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:51:38.322439 | orchestrator |
2026-02-17 04:51:38.322450 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-17 04:51:38.322461 | orchestrator | Tuesday 17 February 2026 04:51:37 +0000 (0:00:01.733) 0:00:15.358 ******
2026-02-17 04:51:38.322471 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:51:38.322483 | orchestrator |
2026-02-17 04:51:38.322493 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-17 04:51:38.322504 | orchestrator | Tuesday 17 February 2026 04:51:37 +0000 (0:00:00.257) 0:00:15.616 ******
2026-02-17 04:51:38.322515 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:51:38.322526 | orchestrator |
2026-02-17 04:51:38.322548 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:51:41.033174 | orchestrator | Tuesday 17 February 2026 04:51:38 +0000 (0:00:00.301) 0:00:15.917 ******
2026-02-17 04:51:41.033281 | orchestrator |
2026-02-17 04:51:41.033298 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:51:41.033310 | orchestrator | Tuesday 17 February 2026 04:51:38 +0000 (0:00:00.073) 0:00:15.991 ******
2026-02-17 04:51:41.033321 | orchestrator |
2026-02-17 04:51:41.033333 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:51:41.033344 | orchestrator | Tuesday 17 February 2026 04:51:38 +0000 (0:00:00.070) 0:00:16.061 ******
2026-02-17 04:51:41.033355 | orchestrator |
2026-02-17 04:51:41.033366 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-17 04:51:41.033377 | orchestrator | Tuesday 17 February 2026 04:51:38 +0000 (0:00:00.074) 0:00:16.136 ******
2026-02-17 04:51:41.033389 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:51:41.033441 | orchestrator |
2026-02-17 04:51:41.033453 | orchestrator | TASK [Print report file information] *******************************************
2026-02-17 04:51:41.033464 | orchestrator | Tuesday 17 February 2026 04:51:39 +0000 (0:00:01.530) 0:00:17.666 ******
2026-02-17 04:51:41.033474 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-17 04:51:41.033486 | orchestrator |  "msg": [
2026-02-17 04:51:41.033498 | orchestrator |  "Validator run completed.",
2026-02-17 04:51:41.033510 | orchestrator |  "You can find the report file here:",
2026-02-17 04:51:41.033522 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-17T04:51:23+00:00-report.json",
2026-02-17 04:51:41.033534 | orchestrator |  "on the following host:",
2026-02-17 04:51:41.033546 | orchestrator |  "testbed-manager"
2026-02-17 04:51:41.033569 | orchestrator |  ]
2026-02-17 04:51:41.033581 | orchestrator | }
2026-02-17 04:51:41.033603 | orchestrator |
2026-02-17 04:51:41.033614 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 04:51:41.033627 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-17 04:51:41.033640 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 04:51:41.033651 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 04:51:41.033662 | orchestrator |
2026-02-17 04:51:41.033673 | orchestrator |
2026-02-17 04:51:41.033685 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 04:51:41.033696 | orchestrator | Tuesday 17 February 2026 04:51:40 +0000 (0:00:00.847) 0:00:18.514 ******
2026-02-17 04:51:41.033734 | orchestrator | ===============================================================================
2026-02-17 04:51:41.033747 | orchestrator | Aggregate test results step one ----------------------------------------- 1.73s
2026-02-17 04:51:41.033760 | orchestrator | Write report file ------------------------------------------------------- 1.53s
2026-02-17 04:51:41.033773 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.52s
2026-02-17 04:51:41.033786 | orchestrator | Gather status data ------------------------------------------------------ 1.30s
2026-02-17 04:51:41.033799 | orchestrator | Get container info ------------------------------------------------------ 1.10s
2026-02-17 04:51:41.033811 | orchestrator | Create report output directory ------------------------------------------ 0.98s
2026-02-17 04:51:41.033823 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s
2026-02-17 04:51:41.033835 | orchestrator | Print report file information ------------------------------------------- 0.85s
2026-02-17 04:51:41.033848 | orchestrator | Set quorum test data ---------------------------------------------------- 0.56s
2026-02-17 04:51:41.033860 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s
2026-02-17 04:51:41.033887 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.47s
2026-02-17 04:51:41.033900 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s
2026-02-17 04:51:41.033912 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s
2026-02-17 04:51:41.033925 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s
2026-02-17 04:51:41.033937 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2026-02-17 04:51:41.033950 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2026-02-17 04:51:41.033962 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s
2026-02-17 04:51:41.033975 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2026-02-17 04:51:41.033987 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s
2026-02-17 04:51:41.034000 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2026-02-17 04:51:41.347135 | orchestrator | + osism validate ceph-mgrs
2026-02-17 04:52:12.434375 | orchestrator |
2026-02-17 04:52:12.434542 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-02-17 04:52:12.434561 | orchestrator |
2026-02-17 04:52:12.434574 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-17 04:52:12.434586 | orchestrator | Tuesday 17 February 2026 04:51:58 +0000 (0:00:00.432) 0:00:00.432 ******
2026-02-17 04:52:12.434598 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:12.434609 | orchestrator |
2026-02-17 04:52:12.434621 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-17 04:52:12.434632 | orchestrator | Tuesday 17 February 2026 04:51:58 +0000 (0:00:00.811) 0:00:01.244 ******
2026-02-17 04:52:12.434643 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:12.434654 | orchestrator |
2026-02-17 04:52:12.434665 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-17 04:52:12.434676 | orchestrator | Tuesday 17 February 2026 04:51:59 +0000 (0:00:00.956) 0:00:02.200 ******
2026-02-17 04:52:12.434688 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.434700 | orchestrator |
2026-02-17 04:52:12.434711 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-17 04:52:12.434722 | orchestrator | Tuesday 17 February 2026 04:51:59 +0000 (0:00:00.129) 0:00:02.330 ******
2026-02-17 04:52:12.434733 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.434744 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:52:12.434755 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:52:12.434766 | orchestrator |
2026-02-17 04:52:12.434777 | orchestrator | TASK [Get container info] ******************************************************
2026-02-17 04:52:12.434788 | orchestrator | Tuesday 17 February 2026 04:52:00 +0000 (0:00:00.300) 0:00:02.630 ******
2026-02-17 04:52:12.434822 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:52:12.434833 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:52:12.434844 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.434855 | orchestrator |
2026-02-17 04:52:12.434866 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-17 04:52:12.434878 | orchestrator | Tuesday 17 February 2026 04:52:01 +0000 (0:00:01.015) 0:00:03.645 ******
2026-02-17 04:52:12.434889 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.434902 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:52:12.434915 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:52:12.434927 | orchestrator |
2026-02-17 04:52:12.434939 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-17 04:52:12.434952 | orchestrator | Tuesday 17 February 2026 04:52:01 +0000 (0:00:00.317) 0:00:03.963 ******
2026-02-17 04:52:12.434965 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.434978 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:52:12.434990 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:52:12.435003 | orchestrator |
2026-02-17 04:52:12.435015 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-17 04:52:12.435028 | orchestrator | Tuesday 17 February 2026 04:52:02 +0000 (0:00:00.482) 0:00:04.446 ******
2026-02-17 04:52:12.435040 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.435053 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:52:12.435065 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:52:12.435077 | orchestrator |
2026-02-17 04:52:12.435090 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-02-17 04:52:12.435103 | orchestrator | Tuesday 17 February 2026 04:52:02 +0000 (0:00:00.306) 0:00:04.752 ******
2026-02-17 04:52:12.435116 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.435128 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:52:12.435141 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:52:12.435153 | orchestrator |
2026-02-17 04:52:12.435166 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-02-17 04:52:12.435178 | orchestrator | Tuesday 17 February 2026 04:52:02 +0000 (0:00:00.320) 0:00:05.073 ******
2026-02-17 04:52:12.435191 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.435203 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:52:12.435215 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:52:12.435228 | orchestrator |
2026-02-17 04:52:12.435240 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-17 04:52:12.435253 | orchestrator | Tuesday 17 February 2026 04:52:03 +0000 (0:00:00.541) 0:00:05.614 ******
2026-02-17 04:52:12.435264 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.435275 | orchestrator |
2026-02-17 04:52:12.435286 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-17 04:52:12.435297 | orchestrator | Tuesday 17 February 2026 04:52:03 +0000 (0:00:00.255) 0:00:05.869 ******
2026-02-17 04:52:12.435308 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.435319 | orchestrator |
2026-02-17 04:52:12.435330 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-17 04:52:12.435341 | orchestrator | Tuesday 17 February 2026 04:52:03 +0000 (0:00:00.257) 0:00:06.127 ******
2026-02-17 04:52:12.435352 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.435363 | orchestrator |
2026-02-17 04:52:12.435374 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:12.435385 | orchestrator | Tuesday 17 February 2026 04:52:03 +0000 (0:00:00.259) 0:00:06.386 ******
2026-02-17 04:52:12.435396 | orchestrator |
2026-02-17 04:52:12.435407 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:12.435418 | orchestrator | Tuesday 17 February 2026 04:52:04 +0000 (0:00:00.073) 0:00:06.459 ******
2026-02-17 04:52:12.435429 | orchestrator |
2026-02-17 04:52:12.435457 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:12.435469 | orchestrator | Tuesday 17 February 2026 04:52:04 +0000 (0:00:00.078) 0:00:06.538 ******
2026-02-17 04:52:12.435487 | orchestrator |
2026-02-17 04:52:12.435498 | orchestrator | TASK [Print report file information] *******************************************
2026-02-17 04:52:12.435509 | orchestrator | Tuesday 17 February 2026 04:52:04 +0000 (0:00:00.080) 0:00:06.619 ******
2026-02-17 04:52:12.435520 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.435531 | orchestrator |
2026-02-17 04:52:12.435542 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-17 04:52:12.435553 | orchestrator | Tuesday 17 February 2026 04:52:04 +0000 (0:00:00.248) 0:00:06.867 ******
2026-02-17 04:52:12.435564 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.435575 | orchestrator |
2026-02-17 04:52:12.435604 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-02-17 04:52:12.435616 | orchestrator | Tuesday 17 February 2026 04:52:04 +0000 (0:00:00.254) 0:00:07.122 ******
2026-02-17 04:52:12.435627 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.435638 | orchestrator |
2026-02-17 04:52:12.435649 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-02-17 04:52:12.435660 | orchestrator | Tuesday 17 February 2026 04:52:04 +0000 (0:00:00.134) 0:00:07.256 ******
2026-02-17 04:52:12.435671 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:52:12.435681 | orchestrator |
2026-02-17 04:52:12.435693 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-02-17 04:52:12.435703 | orchestrator | Tuesday 17 February 2026 04:52:06 +0000 (0:00:01.988) 0:00:09.245 ******
2026-02-17 04:52:12.435714 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.435725 | orchestrator |
2026-02-17 04:52:12.435753 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-02-17 04:52:12.435787 | orchestrator | Tuesday 17 February 2026 04:52:07 +0000 (0:00:00.462) 0:00:09.707 ******
2026-02-17 04:52:12.435799 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.435810 | orchestrator |
2026-02-17 04:52:12.435821 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-02-17 04:52:12.435832 | orchestrator | Tuesday 17 February 2026 04:52:07 +0000 (0:00:00.323) 0:00:10.031 ******
2026-02-17 04:52:12.435843 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.435853 | orchestrator |
2026-02-17 04:52:12.435864 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-02-17 04:52:12.435875 | orchestrator | Tuesday 17 February 2026 04:52:07 +0000 (0:00:00.147) 0:00:10.179 ******
2026-02-17 04:52:12.435886 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:52:12.435897 | orchestrator |
2026-02-17 04:52:12.435908 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-17 04:52:12.435918 | orchestrator | Tuesday 17 February 2026 04:52:07 +0000 (0:00:00.150) 0:00:10.330 ******
2026-02-17 04:52:12.435929 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:12.435940 | orchestrator |
2026-02-17 04:52:12.435951 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-17 04:52:12.435961 | orchestrator | Tuesday 17 February 2026 04:52:08 +0000 (0:00:00.265) 0:00:10.595 ******
2026-02-17 04:52:12.435972 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:52:12.435983 | orchestrator |
2026-02-17 04:52:12.435994 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-17 04:52:12.436005 | orchestrator | Tuesday 17 February 2026 04:52:08 +0000 (0:00:00.263) 0:00:10.858 ******
2026-02-17 04:52:12.436015 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:12.436026 | orchestrator |
2026-02-17 04:52:12.436037 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-17 04:52:12.436048 | orchestrator | Tuesday 17 February 2026 04:52:09 +0000 (0:00:01.260) 0:00:12.119 ******
2026-02-17 04:52:12.436058 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:12.436069 | orchestrator |
2026-02-17 04:52:12.436080 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-17 04:52:12.436091 | orchestrator | Tuesday 17 February 2026 04:52:09 +0000 (0:00:00.246) 0:00:12.366 ******
2026-02-17 04:52:12.436108 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:12.436120 | orchestrator |
2026-02-17 04:52:12.436130 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:12.436141 | orchestrator | Tuesday 17 February 2026 04:52:10 +0000 (0:00:00.280) 0:00:12.646 ******
2026-02-17 04:52:12.436152 | orchestrator |
2026-02-17 04:52:12.436163 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:12.436174 | orchestrator | Tuesday 17 February 2026 04:52:10 +0000 (0:00:00.078) 0:00:12.725 ******
2026-02-17 04:52:12.436185 | orchestrator |
2026-02-17 04:52:12.436196 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:12.436206 | orchestrator | Tuesday 17 February 2026 04:52:10 +0000 (0:00:00.078) 0:00:12.803 ******
2026-02-17 04:52:12.436217 | orchestrator |
2026-02-17 04:52:12.436228 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-17 04:52:12.436239 | orchestrator | Tuesday 17 February 2026 04:52:10 +0000 (0:00:00.288) 0:00:13.092 ******
2026-02-17 04:52:12.436250 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:12.436260 | orchestrator |
2026-02-17 04:52:12.436271 | orchestrator | TASK [Print report file information] *******************************************
2026-02-17 04:52:12.436282 | orchestrator | Tuesday 17 February 2026 04:52:12 +0000 (0:00:01.318) 0:00:14.410 ******
2026-02-17 04:52:12.436293 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-17 04:52:12.436304 | orchestrator |  "msg": [
2026-02-17 04:52:12.436315 | orchestrator |  "Validator run completed.",
2026-02-17 04:52:12.436331 | orchestrator |  "You can find the report file here:",
2026-02-17 04:52:12.436342 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-17T04:51:58+00:00-report.json",
2026-02-17 04:52:12.436354 | orchestrator |  "on the following host:",
2026-02-17 04:52:12.436365 | orchestrator |  "testbed-manager"
2026-02-17 04:52:12.436376 | orchestrator |  ]
2026-02-17 04:52:12.436387 | orchestrator | }
2026-02-17 04:52:12.436398 | orchestrator |
2026-02-17 04:52:12.436409 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 04:52:12.436421 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-17 04:52:12.436459 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 04:52:12.436480 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 04:52:12.783090 | orchestrator |
2026-02-17 04:52:12.783187 | orchestrator |
2026-02-17 04:52:12.783203 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 04:52:12.783217 | orchestrator | Tuesday 17 February 2026 04:52:12 +0000 (0:00:00.408) 0:00:14.818 ******
2026-02-17 04:52:12.783228 | orchestrator | ===============================================================================
2026-02-17 04:52:12.783240 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.99s
2026-02-17 04:52:12.783251 | orchestrator | Write report file ------------------------------------------------------- 1.32s
2026-02-17 04:52:12.783261 | orchestrator | Aggregate test results step one ----------------------------------------- 1.26s
2026-02-17 04:52:12.783272 | orchestrator | Get container info ------------------------------------------------------ 1.02s
2026-02-17 04:52:12.783283 | orchestrator | Create report output directory ------------------------------------------ 0.96s
2026-02-17 04:52:12.783293 | orchestrator | Get timestamp for report file ------------------------------------------- 0.81s
2026-02-17 04:52:12.783304 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.54s
2026-02-17 04:52:12.783315 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s
2026-02-17 04:52:12.783350 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.46s
2026-02-17 04:52:12.783362 | orchestrator | Flush handlers ---------------------------------------------------------- 0.45s
2026-02-17 04:52:12.783373 | orchestrator | Print report file information ------------------------------------------- 0.41s
2026-02-17 04:52:12.783384 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s
2026-02-17 04:52:12.783394 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.32s
2026-02-17 04:52:12.783405 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2026-02-17 04:52:12.783416 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2026-02-17 04:52:12.783427 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2026-02-17 04:52:12.783484 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2026-02-17 04:52:12.783496 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s
2026-02-17 04:52:12.783506 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s
2026-02-17 04:52:12.783517 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2026-02-17 04:52:13.111750 | orchestrator | + osism validate ceph-osds
2026-02-17 04:52:33.964152 | orchestrator |
2026-02-17 04:52:33.964298 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-02-17 04:52:33.964324 | orchestrator |
2026-02-17 04:52:33.964345 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-17 04:52:33.964366 | orchestrator | Tuesday 17 February 2026 04:52:29 +0000 (0:00:00.428) 0:00:00.428 ******
2026-02-17 04:52:33.964386 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:33.964406 | orchestrator |
2026-02-17 04:52:33.964426 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-17 04:52:33.964445 | orchestrator | Tuesday 17 February 2026 04:52:30 +0000 (0:00:00.800) 0:00:01.229 ******
2026-02-17 04:52:33.964516 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:33.964538 | orchestrator |
2026-02-17 04:52:33.964558 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-17 04:52:33.964578 | orchestrator | Tuesday 17 February 2026 04:52:30 +0000 (0:00:00.489) 0:00:01.718 ******
2026-02-17 04:52:33.964596 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:33.964615 | orchestrator |
2026-02-17 04:52:33.964634 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-17 04:52:33.964652 | orchestrator | Tuesday 17 February 2026 04:52:31 +0000 (0:00:00.703) 0:00:02.421 ******
2026-02-17 04:52:33.964672 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:33.964694 | orchestrator |
2026-02-17 04:52:33.964715 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-17 04:52:33.964736 | orchestrator | Tuesday 17 February 2026 04:52:31 +0000 (0:00:00.144) 0:00:02.565 ******
2026-02-17 04:52:33.964757 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:52:33.964777 | orchestrator |
2026-02-17 04:52:33.964796 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-17 04:52:33.964815 | orchestrator | Tuesday 17 February 2026 04:52:31 +0000 (0:00:00.133) 0:00:02.699 ******
2026-02-17 04:52:33.964833 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:52:33.964851 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:52:33.964870 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:52:33.964890 | orchestrator |
2026-02-17 04:52:33.964931 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-17 04:52:33.964952 | orchestrator | Tuesday 17 February 2026 04:52:32 +0000 (0:00:00.296) 0:00:02.995 ******
2026-02-17 04:52:33.964969 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:33.964987 | orchestrator |
2026-02-17 04:52:33.965005 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-17 04:52:33.965043 | orchestrator | Tuesday 17 February 2026 04:52:32 +0000 (0:00:00.152) 0:00:03.148 ******
2026-02-17 04:52:33.965054 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:33.965065 | orchestrator | ok: [testbed-node-4]
2026-02-17 04:52:33.965076 | orchestrator | ok: [testbed-node-5]
2026-02-17 04:52:33.965087 | orchestrator |
2026-02-17 04:52:33.965098 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-02-17 04:52:33.965109 | orchestrator | Tuesday 17 February 2026 04:52:32 +0000 (0:00:00.332) 0:00:03.481 ******
2026-02-17 04:52:33.965120 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:33.965131 | orchestrator |
2026-02-17 04:52:33.965142 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-17 04:52:33.965153 | orchestrator | Tuesday 17 February 2026 04:52:33 +0000 (0:00:00.749) 0:00:04.230 ******
2026-02-17 04:52:33.965163 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:33.965174 | orchestrator | ok: [testbed-node-4]
2026-02-17 04:52:33.965185 | orchestrator | ok: [testbed-node-5]
2026-02-17 04:52:33.965195 | orchestrator |
2026-02-17 04:52:33.965206 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-02-17 04:52:33.965217 | orchestrator | Tuesday 17 February 2026 04:52:33 +0000 (0:00:00.313) 0:00:04.544 ******
2026-02-17 04:52:33.965231 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69b810790ad8404aabe8080b5862d9a4f0e6cb496ba58da5ac2fe4734f91a19b', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-17 04:52:33.965247 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6c2464febc6b50a27e42d23f20c38c46381fca4a3362cf3efbcecec3f4e2670e', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-17 04:52:33.965259 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8733afb3f55ad7b654e28170617019fab384124053b8a022595d32df8ff3005e', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-17 04:52:33.965271 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4481230ace9e9b34d6d438cb5b9106dac3d58a5f8891470d21ca68ca4f22b04f', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-17 04:52:33.965282 | orchestrator | skipping: [testbed-node-3] => (item={'id': '85c2b03b7486db19b4ca971068f2695e0ff62d6e9cd2203fa8c862eea5680e2b', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-17 04:52:33.965325 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'caec32ffd5293c63bce3fe577025bf4e7ec4c54159f261e7cfd510809e5d9189', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-17 04:52:33.965338 | orchestrator | skipping: [testbed-node-3] => (item={'id': '179ef4f7e8eaee6757591dc412033784a6d8f3ef598f0502f724734c743a9377', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-17 04:52:33.965349 | orchestrator | skipping: [testbed-node-3] => (item={'id': '26abdf4c36a6bb2b9d880161488cd5e2e27697a8827a3763ef72565412a22538', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})
2026-02-17 04:52:33.965360 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0b137ca3a1c2fa3d5f91428a5b84990dc93ab07b015ee71b9db8abe240588acf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-17 04:52:33.965380 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0032e6830f2a65be4b316630c8e284484921785fcdebfa9af3d47fb71e0d4dd5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-17 04:52:33.965391 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5048c9f6a4fb09a2ca6b88c4da9ce5472c56ef2752ffac9f46714e99156e523a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-17 04:52:33.965403 | orchestrator | ok: [testbed-node-3] => (item={'id': '56e9d2e2fe653b8503cdd944be8abb566591aea621269fe19f2b8517d69d3746', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-17 04:52:33.965414 | orchestrator | ok: [testbed-node-3] => (item={'id': '1b56c624be8c7de5759f78d099406963f145a6ed005837e5e9bd8734addc53cd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-17 04:52:33.965424 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b0a80690b6a2b8f4b3d8ab8e1a7c973d7a499328a755cede96e5dfd907074cc4', 'image':
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:33.965434 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9dc8d64322d236b4237ba88bbe867a94bb0c460b0fe91c7ed8812d2f666e4794', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-17 04:52:33.965444 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7706f543f8f6bb855655a8903e03da294ccaf48d4bf1944a23ce4810b56d2508', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-17 04:52:33.965455 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e180e803762bb948568236c6353a7f27c3361d6101fe54f8f74ed44dee35ad83', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:33.965496 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f006f39f4287d91639aa81390a50a9d41d67da4bfaf856ce5ac24c08a8bd5af9', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:33.965512 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8a3230824ccb58183918f184907796a7327d5e35fbbf3fc3eff761960a857d2d', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:33.965523 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'faeb3d5d2c545d7da1cee0db80723e514a9aed78d470235bbd8987659f3f0e61', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 
9 minutes'})  2026-02-17 04:52:33.965541 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1aa1944ad2bc3c95ab3d4b3a33d169d08b310ea0d57a613868baf29870775028', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-17 04:52:34.236192 | orchestrator | skipping: [testbed-node-4] => (item={'id': '02b3d54d439a520e02b1ebbc099ca5628c8636a8a3b4359ac52c4f3f128bbf8f', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-17 04:52:34.236345 | orchestrator | skipping: [testbed-node-4] => (item={'id': '54507cdd153565910bad2a9e295e650732f6d9c5ea75930f67013189f8d07785', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-17 04:52:34.236387 | orchestrator | skipping: [testbed-node-4] => (item={'id': '96494ff59caa21934642f1aeed3f4ba9f47ce247504ebb354ecde3fde5108c89', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-17 04:52:34.236403 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b1cafa8a7ed184a8ece0680d3c94833873cafda715d077e7545b906e011c38ad', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-17 04:52:34.236421 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dbfe626af9a569d2ccaaf22710576f0a88d5ac212397cbfbe369b1c56bfd3402', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-17 04:52:34.236433 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': '90c049a73ee2b9874abafdb15f82f8b75a889fd40d75d323be8ed80c132da6cf', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-17 04:52:34.236446 | orchestrator | skipping: [testbed-node-4] => (item={'id': '46fae098eee005f19533f8746c5ff07682015afa57f823fb9cbe9ed449e66a20', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:34.236459 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c54ed5607f3a9a5d40a970ae92dbad89a5ce1d4003d4d013dc0b1989d1febe3d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:34.236564 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0d4774a72bc26d42cffcac9717c9bc635dd84eefb5b15e4c0d79d19d414d6feb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:34.236579 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b084c5928f30dc12a02136db599d885aa6335981aac54dd521e63578731ff268', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-17 04:52:34.236592 | orchestrator | ok: [testbed-node-4] => (item={'id': '842812e88add09fa810f3d5c0fb4c976de63dc38cf7e6656fd065521c8605aa9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-17 04:52:34.236605 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e5ed19c701f8aae1e1fca53f2b4140c7b3bde301b64a22f8453ae2e48969a14d', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 
'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:34.236617 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c2102e4eabe8f4b1fcd819f6a33adc4a2bfb4cfbb66fb5bcf92d97369658a5a6', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-17 04:52:34.236630 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e64da67cf6a01c0b7679c83188b3732603cd9a1204a744f72328a6f11a36a8c9', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-17 04:52:34.236661 | orchestrator | skipping: [testbed-node-4] => (item={'id': '856825d7dad088cb4a04595483d2418b62ec15456c7c6cca36d33f5251c8e5f2', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:34.236684 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9c18112a07d930cd1546a61cc2b47223f28b911f9b1796abc364a05f5102623b', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:34.236697 | orchestrator | skipping: [testbed-node-4] => (item={'id': '894b904c10abf37481f20268d02257351a2b1bd93f5a75aa0c6a41800fc3dfdc', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:34.236710 | orchestrator | skipping: [testbed-node-5] => (item={'id': '86c484df38e34ae393bfd76c29aba9c56b6a8d3c82f8b624920a634da5ac9327', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-17 04:52:34.236722 | orchestrator | skipping: [testbed-node-5] => 
(item={'id': '0ff227862cc309ce205ae95565a3135aa09f327fa877dc78f7cf0fcd2af18838', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-17 04:52:34.236740 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f93f47c3d814103b403a9b23cc11e3a93028c0c0e2277600c81e9fab6df1420', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-17 04:52:34.236753 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eea271a040341740ced9ea6201d156836defe09cb7ba71fd6ee8e98e01f18750', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-17 04:52:34.236765 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6cc2cc3285c6efd07e9cf3191f39398ced1250f0360a9436e87d70641e0ab652', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-17 04:52:34.236776 | orchestrator | skipping: [testbed-node-5] => (item={'id': '024751fa776ad7438982980b2c748df59f4c6c98eaa7f6806d6d033c380190ca', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-17 04:52:34.236789 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2853f22dd02cb5d532a85d91b4a484c2d4ac0b47d381fb431b3526896ba8aa95', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-17 04:52:34.236801 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2461e23a88532b906bd0c6784d68172a45ec3e8adafd4e949bfd73e6889de65', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-17 04:52:34.236813 | orchestrator | skipping: [testbed-node-5] => (item={'id': '22e77ec2ebcf7af301135d48cc2f7d2acfab5be9a4cdbb48f6fac0875d4cf21d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:34.236826 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cbcb408a5fcfdddeb40ff667820eb4f018630ad65c4339c823cc938680ec4335', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:34.236838 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9eaf3cbfc780e3af1df3aa4826c1a21c9a2c2f500eee1de6362b8db3441f3b71', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:34.236857 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ab09fe7f3a953a2fbb7dc812b0c7e727649375eb8729dd2977db77b92ec99a56', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-17 04:52:34.236878 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b82082b2a6e42a049eb961b7c5dcdc4cf5253ca4fc48ca76365f86ec4796e400', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-17 04:52:45.411155 | orchestrator | skipping: [testbed-node-5] => (item={'id': '349fbcca35c49e8d3b05e0c9ca3f20c7bea28acd561d5e49fce26f481b65d8c1', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-17 04:52:45.411308 | 
orchestrator | skipping: [testbed-node-5] => (item={'id': 'bd2c219bbf9e7a4a6dcaa3f538af2ce6e8a2a00739481f5d2f5717de2b2ef6f7', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-17 04:52:45.411337 | orchestrator | skipping: [testbed-node-5] => (item={'id': '39f8a6e98037fda9b590af5cc2ae0496611192160287b76a2d90a18f49fa1d0a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-17 04:52:45.411360 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c76974d74eb67357097ed6825d67550ffc51408bb127b696fc41b0479c5a0852', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:45.411400 | orchestrator | skipping: [testbed-node-5] => (item={'id': '18b88a054d87e2625e718be36daa22fdfe015fa15f10ae0dd2cd5afe65b213c1', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:45.411420 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb11acf710d6ea70c940e007fd56c60e46450dadb2c69e03a695e5adfa89f5d9', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-17 04:52:45.411439 | orchestrator | 2026-02-17 04:52:45.411459 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-17 04:52:45.411545 | orchestrator | Tuesday 17 February 2026 04:52:34 +0000 (0:00:00.506) 0:00:05.051 ****** 2026-02-17 04:52:45.411568 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.411589 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:45.411610 | orchestrator | ok: [testbed-node-5] 2026-02-17 
04:52:45.411631 | orchestrator | 2026-02-17 04:52:45.411652 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-17 04:52:45.411670 | orchestrator | Tuesday 17 February 2026 04:52:34 +0000 (0:00:00.304) 0:00:05.355 ****** 2026-02-17 04:52:45.411684 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.411698 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:52:45.411710 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:52:45.411722 | orchestrator | 2026-02-17 04:52:45.411735 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-17 04:52:45.411748 | orchestrator | Tuesday 17 February 2026 04:52:35 +0000 (0:00:00.486) 0:00:05.842 ****** 2026-02-17 04:52:45.411761 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.411774 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:45.411786 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:52:45.411799 | orchestrator | 2026-02-17 04:52:45.411817 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-17 04:52:45.411835 | orchestrator | Tuesday 17 February 2026 04:52:35 +0000 (0:00:00.303) 0:00:06.145 ****** 2026-02-17 04:52:45.411852 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.411870 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:45.411919 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:52:45.411939 | orchestrator | 2026-02-17 04:52:45.411957 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-17 04:52:45.411971 | orchestrator | Tuesday 17 February 2026 04:52:35 +0000 (0:00:00.283) 0:00:06.429 ****** 2026-02-17 04:52:45.411983 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-17 04:52:45.411998 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 
'state': 'running'})  2026-02-17 04:52:45.412011 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.412024 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-17 04:52:45.412035 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-17 04:52:45.412046 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:52:45.412057 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-17 04:52:45.412068 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-17 04:52:45.412078 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:52:45.412089 | orchestrator | 2026-02-17 04:52:45.412100 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-17 04:52:45.412111 | orchestrator | Tuesday 17 February 2026 04:52:35 +0000 (0:00:00.322) 0:00:06.751 ****** 2026-02-17 04:52:45.412122 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.412133 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:45.412144 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:52:45.412154 | orchestrator | 2026-02-17 04:52:45.412165 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-17 04:52:45.412176 | orchestrator | Tuesday 17 February 2026 04:52:36 +0000 (0:00:00.491) 0:00:07.242 ****** 2026-02-17 04:52:45.412187 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.412224 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:52:45.412243 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:52:45.412261 | orchestrator | 2026-02-17 04:52:45.412279 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-17 04:52:45.412297 | orchestrator | Tuesday 17 
February 2026 04:52:36 +0000 (0:00:00.288) 0:00:07.530 ****** 2026-02-17 04:52:45.412314 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.412334 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:52:45.412352 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:52:45.412370 | orchestrator | 2026-02-17 04:52:45.412389 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-17 04:52:45.412406 | orchestrator | Tuesday 17 February 2026 04:52:37 +0000 (0:00:00.308) 0:00:07.839 ****** 2026-02-17 04:52:45.412425 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.412443 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:45.412462 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:52:45.412507 | orchestrator | 2026-02-17 04:52:45.412527 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-17 04:52:45.412546 | orchestrator | Tuesday 17 February 2026 04:52:37 +0000 (0:00:00.323) 0:00:08.163 ****** 2026-02-17 04:52:45.412564 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.412581 | orchestrator | 2026-02-17 04:52:45.412592 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-17 04:52:45.412603 | orchestrator | Tuesday 17 February 2026 04:52:37 +0000 (0:00:00.648) 0:00:08.812 ****** 2026-02-17 04:52:45.412613 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.412624 | orchestrator | 2026-02-17 04:52:45.412635 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-17 04:52:45.412646 | orchestrator | Tuesday 17 February 2026 04:52:38 +0000 (0:00:00.278) 0:00:09.091 ****** 2026-02-17 04:52:45.412657 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.412668 | orchestrator | 2026-02-17 04:52:45.412679 | orchestrator | TASK [Flush handlers] ********************************************************** 
2026-02-17 04:52:45.412702 | orchestrator | Tuesday 17 February 2026 04:52:38 +0000 (0:00:00.245) 0:00:09.336 ****** 2026-02-17 04:52:45.412713 | orchestrator | 2026-02-17 04:52:45.412724 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-17 04:52:45.412735 | orchestrator | Tuesday 17 February 2026 04:52:38 +0000 (0:00:00.068) 0:00:09.404 ****** 2026-02-17 04:52:45.412747 | orchestrator | 2026-02-17 04:52:45.412758 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-17 04:52:45.412769 | orchestrator | Tuesday 17 February 2026 04:52:38 +0000 (0:00:00.070) 0:00:09.475 ****** 2026-02-17 04:52:45.412779 | orchestrator | 2026-02-17 04:52:45.412790 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-17 04:52:45.412801 | orchestrator | Tuesday 17 February 2026 04:52:38 +0000 (0:00:00.071) 0:00:09.546 ****** 2026-02-17 04:52:45.412812 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.412823 | orchestrator | 2026-02-17 04:52:45.412834 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-17 04:52:45.412845 | orchestrator | Tuesday 17 February 2026 04:52:38 +0000 (0:00:00.249) 0:00:09.796 ****** 2026-02-17 04:52:45.412855 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.412869 | orchestrator | 2026-02-17 04:52:45.412887 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-17 04:52:45.412907 | orchestrator | Tuesday 17 February 2026 04:52:39 +0000 (0:00:00.253) 0:00:10.049 ****** 2026-02-17 04:52:45.412924 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.412944 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:45.412963 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:52:45.412983 | orchestrator | 2026-02-17 04:52:45.413000 | orchestrator | TASK [Set _mon_hostname 
fact] ************************************************** 2026-02-17 04:52:45.413017 | orchestrator | Tuesday 17 February 2026 04:52:39 +0000 (0:00:00.297) 0:00:10.346 ****** 2026-02-17 04:52:45.413028 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.413039 | orchestrator | 2026-02-17 04:52:45.413055 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-17 04:52:45.413073 | orchestrator | Tuesday 17 February 2026 04:52:40 +0000 (0:00:00.664) 0:00:11.011 ****** 2026-02-17 04:52:45.413091 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 04:52:45.413109 | orchestrator | 2026-02-17 04:52:45.413126 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-17 04:52:45.413144 | orchestrator | Tuesday 17 February 2026 04:52:41 +0000 (0:00:01.552) 0:00:12.564 ****** 2026-02-17 04:52:45.413162 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.413181 | orchestrator | 2026-02-17 04:52:45.413198 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-17 04:52:45.413213 | orchestrator | Tuesday 17 February 2026 04:52:41 +0000 (0:00:00.148) 0:00:12.712 ****** 2026-02-17 04:52:45.413231 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.413250 | orchestrator | 2026-02-17 04:52:45.413268 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-17 04:52:45.413285 | orchestrator | Tuesday 17 February 2026 04:52:42 +0000 (0:00:00.313) 0:00:13.026 ****** 2026-02-17 04:52:45.413302 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:45.413321 | orchestrator | 2026-02-17 04:52:45.413339 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-17 04:52:45.413358 | orchestrator | Tuesday 17 February 2026 04:52:42 +0000 (0:00:00.131) 0:00:13.158 ****** 2026-02-17 04:52:45.413375 
| orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.413394 | orchestrator | 2026-02-17 04:52:45.413412 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-17 04:52:45.413431 | orchestrator | Tuesday 17 February 2026 04:52:42 +0000 (0:00:00.144) 0:00:13.303 ****** 2026-02-17 04:52:45.413446 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:45.413462 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:45.413521 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:52:45.413556 | orchestrator | 2026-02-17 04:52:45.413574 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-17 04:52:45.413592 | orchestrator | Tuesday 17 February 2026 04:52:42 +0000 (0:00:00.341) 0:00:13.644 ****** 2026-02-17 04:52:45.413612 | orchestrator | changed: [testbed-node-3] 2026-02-17 04:52:45.413631 | orchestrator | changed: [testbed-node-4] 2026-02-17 04:52:45.413649 | orchestrator | changed: [testbed-node-5] 2026-02-17 04:52:55.731708 | orchestrator | 2026-02-17 04:52:55.731865 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-17 04:52:55.731885 | orchestrator | Tuesday 17 February 2026 04:52:45 +0000 (0:00:02.588) 0:00:16.233 ****** 2026-02-17 04:52:55.732786 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:55.732809 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:55.732823 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:52:55.732836 | orchestrator | 2026-02-17 04:52:55.732847 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-17 04:52:55.732859 | orchestrator | Tuesday 17 February 2026 04:52:45 +0000 (0:00:00.310) 0:00:16.544 ****** 2026-02-17 04:52:55.732870 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:55.732881 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:55.732892 | orchestrator | ok: [testbed-node-5] 2026-02-17 
04:52:55.732903 | orchestrator | 2026-02-17 04:52:55.732915 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-17 04:52:55.732926 | orchestrator | Tuesday 17 February 2026 04:52:46 +0000 (0:00:00.532) 0:00:17.076 ****** 2026-02-17 04:52:55.732938 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:55.732950 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:52:55.732961 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:52:55.732972 | orchestrator | 2026-02-17 04:52:55.732983 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-17 04:52:55.732995 | orchestrator | Tuesday 17 February 2026 04:52:46 +0000 (0:00:00.304) 0:00:17.380 ****** 2026-02-17 04:52:55.733006 | orchestrator | ok: [testbed-node-3] 2026-02-17 04:52:55.733017 | orchestrator | ok: [testbed-node-4] 2026-02-17 04:52:55.733028 | orchestrator | ok: [testbed-node-5] 2026-02-17 04:52:55.733039 | orchestrator | 2026-02-17 04:52:55.733050 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-17 04:52:55.733068 | orchestrator | Tuesday 17 February 2026 04:52:47 +0000 (0:00:00.538) 0:00:17.919 ****** 2026-02-17 04:52:55.733080 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:55.733091 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:52:55.733102 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:52:55.733113 | orchestrator | 2026-02-17 04:52:55.733125 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-17 04:52:55.733137 | orchestrator | Tuesday 17 February 2026 04:52:47 +0000 (0:00:00.328) 0:00:18.248 ****** 2026-02-17 04:52:55.733148 | orchestrator | skipping: [testbed-node-3] 2026-02-17 04:52:55.733159 | orchestrator | skipping: [testbed-node-4] 2026-02-17 04:52:55.733170 | orchestrator | skipping: [testbed-node-5] 2026-02-17 04:52:55.733181 | 
orchestrator |
2026-02-17 04:52:55.733192 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-17 04:52:55.733204 | orchestrator | Tuesday 17 February 2026 04:52:47 +0000 (0:00:00.310) 0:00:18.559 ******
2026-02-17 04:52:55.733215 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:55.733226 | orchestrator | ok: [testbed-node-4]
2026-02-17 04:52:55.733237 | orchestrator | ok: [testbed-node-5]
2026-02-17 04:52:55.733248 | orchestrator |
2026-02-17 04:52:55.733259 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-02-17 04:52:55.733270 | orchestrator | Tuesday 17 February 2026 04:52:48 +0000 (0:00:00.569) 0:00:19.128 ******
2026-02-17 04:52:55.733281 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:55.733293 | orchestrator | ok: [testbed-node-4]
2026-02-17 04:52:55.733303 | orchestrator | ok: [testbed-node-5]
2026-02-17 04:52:55.733314 | orchestrator |
2026-02-17 04:52:55.733325 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-02-17 04:52:55.733360 | orchestrator | Tuesday 17 February 2026 04:52:49 +0000 (0:00:00.761) 0:00:19.890 ******
2026-02-17 04:52:55.733371 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:55.733382 | orchestrator | ok: [testbed-node-4]
2026-02-17 04:52:55.733393 | orchestrator | ok: [testbed-node-5]
2026-02-17 04:52:55.733404 | orchestrator |
2026-02-17 04:52:55.733414 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-02-17 04:52:55.733425 | orchestrator | Tuesday 17 February 2026 04:52:49 +0000 (0:00:00.325) 0:00:20.215 ******
2026-02-17 04:52:55.733436 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:52:55.733447 | orchestrator | skipping: [testbed-node-4]
2026-02-17 04:52:55.733458 | orchestrator | skipping: [testbed-node-5]
2026-02-17 04:52:55.733469 | orchestrator |
2026-02-17 04:52:55.733480 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-02-17 04:52:55.733524 | orchestrator | Tuesday 17 February 2026 04:52:49 +0000 (0:00:00.313) 0:00:20.529 ******
2026-02-17 04:52:55.733544 | orchestrator | ok: [testbed-node-3]
2026-02-17 04:52:55.733560 | orchestrator | ok: [testbed-node-4]
2026-02-17 04:52:55.733576 | orchestrator | ok: [testbed-node-5]
2026-02-17 04:52:55.733594 | orchestrator |
2026-02-17 04:52:55.733612 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-17 04:52:55.733631 | orchestrator | Tuesday 17 February 2026 04:52:50 +0000 (0:00:00.525) 0:00:21.055 ******
2026-02-17 04:52:55.733650 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:55.733669 | orchestrator |
2026-02-17 04:52:55.733687 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-17 04:52:55.733700 | orchestrator | Tuesday 17 February 2026 04:52:50 +0000 (0:00:00.296) 0:00:21.352 ******
2026-02-17 04:52:55.733711 | orchestrator | skipping: [testbed-node-3]
2026-02-17 04:52:55.733722 | orchestrator |
2026-02-17 04:52:55.733732 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-17 04:52:55.733743 | orchestrator | Tuesday 17 February 2026 04:52:50 +0000 (0:00:00.281) 0:00:21.633 ******
2026-02-17 04:52:55.733754 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:55.733765 | orchestrator |
2026-02-17 04:52:55.733776 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-17 04:52:55.733787 | orchestrator | Tuesday 17 February 2026 04:52:52 +0000 (0:00:01.688) 0:00:23.322 ******
2026-02-17 04:52:55.733797 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:55.733808 | orchestrator |
2026-02-17 04:52:55.733820 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-17 04:52:55.733830 | orchestrator | Tuesday 17 February 2026 04:52:52 +0000 (0:00:00.305) 0:00:23.628 ******
2026-02-17 04:52:55.733841 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:55.733852 | orchestrator |
2026-02-17 04:52:55.733883 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:55.733895 | orchestrator | Tuesday 17 February 2026 04:52:53 +0000 (0:00:00.265) 0:00:23.893 ******
2026-02-17 04:52:55.733906 | orchestrator |
2026-02-17 04:52:55.733917 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:55.733928 | orchestrator | Tuesday 17 February 2026 04:52:53 +0000 (0:00:00.076) 0:00:23.969 ******
2026-02-17 04:52:55.733939 | orchestrator |
2026-02-17 04:52:55.733949 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-17 04:52:55.733961 | orchestrator | Tuesday 17 February 2026 04:52:53 +0000 (0:00:00.068) 0:00:24.037 ******
2026-02-17 04:52:55.733971 | orchestrator |
2026-02-17 04:52:55.733982 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-17 04:52:55.733994 | orchestrator | Tuesday 17 February 2026 04:52:53 +0000 (0:00:00.074) 0:00:24.111 ******
2026-02-17 04:52:55.734004 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-17 04:52:55.734015 | orchestrator |
2026-02-17 04:52:55.734088 | orchestrator | TASK [Print report file information] *******************************************
2026-02-17 04:52:55.734119 | orchestrator | Tuesday 17 February 2026 04:52:54 +0000 (0:00:01.534) 0:00:25.646 ******
2026-02-17 04:52:55.734138 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-17 04:52:55.734156 | orchestrator |     "msg": [
2026-02-17 04:52:55.734175 | orchestrator |         "Validator run completed.",
2026-02-17 04:52:55.734189 | orchestrator |         "You can find the report file here:",
2026-02-17 04:52:55.734200 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-02-17T04:52:30+00:00-report.json",
2026-02-17 04:52:55.734219 | orchestrator |         "on the following host:",
2026-02-17 04:52:55.734230 | orchestrator |         "testbed-manager"
2026-02-17 04:52:55.734241 | orchestrator |     ]
2026-02-17 04:52:55.734252 | orchestrator | }
2026-02-17 04:52:55.734263 | orchestrator |
2026-02-17 04:52:55.734274 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 04:52:55.734287 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-17 04:52:55.734300 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-17 04:52:55.734311 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-17 04:52:55.734322 | orchestrator |
2026-02-17 04:52:55.734333 | orchestrator |
2026-02-17 04:52:55.734344 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 04:52:55.734355 | orchestrator | Tuesday 17 February 2026 04:52:55 +0000 (0:00:00.604) 0:00:26.250 ******
2026-02-17 04:52:55.734366 | orchestrator | ===============================================================================
2026-02-17 04:52:55.734377 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.59s
2026-02-17 04:52:55.734388 | orchestrator | Aggregate test results step one ----------------------------------------- 1.69s
2026-02-17 04:52:55.734399 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.55s
2026-02-17 04:52:55.734409 | orchestrator | Write report file ------------------------------------------------------- 1.53s
2026-02-17 04:52:55.734420 | orchestrator | Get timestamp for report file ------------------------------------------- 0.80s
2026-02-17 04:52:55.734431 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s
2026-02-17 04:52:55.734442 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.75s
2026-02-17 04:52:55.734452 | orchestrator | Create report output directory ------------------------------------------ 0.70s
2026-02-17 04:52:55.734463 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.66s
2026-02-17 04:52:55.734473 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s
2026-02-17 04:52:55.734526 | orchestrator | Print report file information ------------------------------------------- 0.60s
2026-02-17 04:52:55.734550 | orchestrator | Prepare test data ------------------------------------------------------- 0.57s
2026-02-17 04:52:55.734569 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.54s
2026-02-17 04:52:55.734580 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.53s
2026-02-17 04:52:55.734591 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.53s
2026-02-17 04:52:55.734602 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.51s
2026-02-17 04:52:55.734612 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.49s
2026-02-17 04:52:55.734623 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.49s
2026-02-17 04:52:55.734634 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.49s
2026-02-17 04:52:55.734644 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s
2026-02-17 04:52:56.032771 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-02-17 04:52:56.041061 | orchestrator | + set -e
2026-02-17 04:52:56.041136 | orchestrator | + source /opt/manager-vars.sh
2026-02-17 04:52:56.042011 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-17 04:52:56.042087 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-17 04:52:56.042099 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-17 04:52:56.042110 | orchestrator | ++ CEPH_VERSION=reef
2026-02-17 04:52:56.042121 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-17 04:52:56.042133 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-17 04:52:56.042144 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-17 04:52:56.042156 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-17 04:52:56.042166 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-17 04:52:56.042177 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-17 04:52:56.042188 | orchestrator | ++ export ARA=false
2026-02-17 04:52:56.042199 | orchestrator | ++ ARA=false
2026-02-17 04:52:56.042210 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-17 04:52:56.042220 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-17 04:52:56.042231 | orchestrator | ++ export TEMPEST=false
2026-02-17 04:52:56.042242 | orchestrator | ++ TEMPEST=false
2026-02-17 04:52:56.042253 | orchestrator | ++ export IS_ZUUL=true
2026-02-17 04:52:56.042263 | orchestrator | ++ IS_ZUUL=true
2026-02-17 04:52:56.042274 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198
2026-02-17 04:52:56.042285 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198
2026-02-17 04:52:56.042296 | orchestrator | ++ export EXTERNAL_API=false
2026-02-17 04:52:56.042307 | orchestrator | ++ EXTERNAL_API=false
2026-02-17 04:52:56.042318 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-17 04:52:56.042329 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-17 04:52:56.042340 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-17 04:52:56.042351 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-17 04:52:56.042361 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-17 04:52:56.042372 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-17 04:52:56.042383 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-17 04:52:56.042393 | orchestrator | + source /etc/os-release
2026-02-17 04:52:56.042404 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-02-17 04:52:56.042415 | orchestrator | ++ NAME=Ubuntu
2026-02-17 04:52:56.042426 | orchestrator | ++ VERSION_ID=24.04
2026-02-17 04:52:56.042436 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-02-17 04:52:56.042447 | orchestrator | ++ VERSION_CODENAME=noble
2026-02-17 04:52:56.042458 | orchestrator | ++ ID=ubuntu
2026-02-17 04:52:56.042469 | orchestrator | ++ ID_LIKE=debian
2026-02-17 04:52:56.042479 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-02-17 04:52:56.042515 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-02-17 04:52:56.042527 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-02-17 04:52:56.042538 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-02-17 04:52:56.042550 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-02-17 04:52:56.042561 | orchestrator | ++ LOGO=ubuntu-logo
2026-02-17 04:52:56.042571 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-02-17 04:52:56.042583 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-02-17 04:52:56.042595 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-17 04:52:56.069920 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-17 04:53:18.700153 | orchestrator |
2026-02-17 04:53:18.700248 | orchestrator | # Status of Elasticsearch
2026-02-17 04:53:18.700259 | orchestrator |
2026-02-17 04:53:18.700266 | orchestrator | + pushd /opt/configuration/contrib
2026-02-17 04:53:18.700274 | orchestrator | + echo
2026-02-17 04:53:18.700281 | orchestrator | + echo '# Status of Elasticsearch'
2026-02-17 04:53:18.700287 | orchestrator | + echo
2026-02-17 04:53:18.700294 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-02-17 04:53:18.889388 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-02-17 04:53:18.889485 | orchestrator |
2026-02-17 04:53:18.889499 | orchestrator | + echo
2026-02-17 04:53:18.889591 | orchestrator | + echo '# Status of MariaDB'
2026-02-17 04:53:18.889609 | orchestrator | # Status of MariaDB
2026-02-17 04:53:18.889648 | orchestrator |
2026-02-17 04:53:18.889660 | orchestrator | + echo
2026-02-17 04:53:18.890465 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-17 04:53:18.959752 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-17 04:53:18.959867 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-17 04:53:18.959891 | orchestrator | + MARIADB_USER=root_shard_0
2026-02-17 04:53:18.959911 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-02-17 04:53:19.028906 | orchestrator | Reading package lists...
2026-02-17 04:53:19.370489 | orchestrator | Building dependency tree...
2026-02-17 04:53:19.370649 | orchestrator | Reading state information...
2026-02-17 04:53:19.718603 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-02-17 04:53:19.718747 | orchestrator | bc set to manually installed.
2026-02-17 04:53:19.718763 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2026-02-17 04:53:20.445556 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-02-17 04:53:20.446509 | orchestrator |
2026-02-17 04:53:20.446578 | orchestrator | # Status of Prometheus
2026-02-17 04:53:20.446592 | orchestrator |
2026-02-17 04:53:20.446602 | orchestrator | + echo
2026-02-17 04:53:20.446613 | orchestrator | + echo '# Status of Prometheus'
2026-02-17 04:53:20.446622 | orchestrator | + echo
2026-02-17 04:53:20.446632 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-02-17 04:53:20.525419 | orchestrator | Unauthorized
2026-02-17 04:53:20.531549 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-02-17 04:53:20.591966 | orchestrator | Unauthorized
2026-02-17 04:53:20.595098 | orchestrator |
2026-02-17 04:53:20.595125 | orchestrator | # Status of RabbitMQ
2026-02-17 04:53:20.595132 | orchestrator |
2026-02-17 04:53:20.595140 | orchestrator | + echo
2026-02-17 04:53:20.595146 | orchestrator | + echo '# Status of RabbitMQ'
2026-02-17 04:53:20.595153 | orchestrator | + echo
2026-02-17 04:53:20.596106 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-17 04:53:20.654101 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-17 04:53:20.654173 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-17 04:53:20.654184 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-02-17 04:53:21.160894 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-02-17 04:53:21.170932 | orchestrator |
2026-02-17 04:53:21.171027 | orchestrator | # Status of Redis
2026-02-17 04:53:21.171051 | orchestrator |
2026-02-17 04:53:21.171067 | orchestrator | + echo
2026-02-17 04:53:21.171086 | orchestrator | + echo '# Status of Redis'
2026-02-17 04:53:21.171105 | orchestrator | + echo
2026-02-17 04:53:21.171126 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-02-17 04:53:21.175989 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001986s;;;0.000000;10.000000
2026-02-17 04:53:21.176309 | orchestrator | + popd
2026-02-17 04:53:21.176333 | orchestrator |
2026-02-17 04:53:21.176344 | orchestrator | + echo
2026-02-17 04:53:21.176477 | orchestrator | # Create backup of MariaDB database
2026-02-17 04:53:21.176496 | orchestrator |
2026-02-17 04:53:21.176508 | orchestrator | + echo '# Create backup of MariaDB database'
2026-02-17 04:53:21.176549 | orchestrator | + echo
2026-02-17 04:53:21.176561 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-02-17 04:53:23.289416 | orchestrator | 2026-02-17 04:53:23 | INFO  | Task 316a65ad-18c4-44d7-9c46-529e6f54cf6e (mariadb_backup) was prepared for execution.
2026-02-17 04:53:23.289580 | orchestrator | 2026-02-17 04:53:23 | INFO  | It takes a moment until task 316a65ad-18c4-44d7-9c46-529e6f54cf6e (mariadb_backup) has been started and output is visible here.
2026-02-17 04:54:41.469543 | orchestrator |
2026-02-17 04:54:41.469742 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 04:54:41.469775 | orchestrator |
2026-02-17 04:54:41.469796 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 04:54:41.469816 | orchestrator | Tuesday 17 February 2026 04:53:27 +0000 (0:00:00.169) 0:00:00.169 ******
2026-02-17 04:54:41.469836 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:54:41.469857 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:54:41.469877 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:54:41.469896 | orchestrator |
2026-02-17 04:54:41.469949 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 04:54:41.469968 | orchestrator | Tuesday 17 February 2026 04:53:27 +0000 (0:00:00.323) 0:00:00.492 ******
2026-02-17 04:54:41.469979 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-17 04:54:41.469996 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-17 04:54:41.470095 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-17 04:54:41.470120 | orchestrator |
2026-02-17 04:54:41.470136 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-17 04:54:41.470154 | orchestrator |
2026-02-17 04:54:41.470171 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-17 04:54:41.470189 | orchestrator | Tuesday 17 February 2026 04:53:28 +0000 (0:00:00.576) 0:00:01.069 ******
2026-02-17 04:54:41.470207 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 04:54:41.470227 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 04:54:41.470244 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 04:54:41.470261 | orchestrator |
2026-02-17 04:54:41.470280 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-17 04:54:41.470299 | orchestrator | Tuesday 17 February 2026 04:53:28 +0000 (0:00:00.417) 0:00:01.487 ******
2026-02-17 04:54:41.470318 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 04:54:41.470340 | orchestrator |
2026-02-17 04:54:41.470354 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-02-17 04:54:41.470383 | orchestrator | Tuesday 17 February 2026 04:53:29 +0000 (0:00:00.542) 0:00:02.029 ******
2026-02-17 04:54:41.470397 | orchestrator | ok: [testbed-node-0]
2026-02-17 04:54:41.470411 | orchestrator | ok: [testbed-node-1]
2026-02-17 04:54:41.470423 | orchestrator | ok: [testbed-node-2]
2026-02-17 04:54:41.470434 | orchestrator |
2026-02-17 04:54:41.470445 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-02-17 04:54:41.470456 | orchestrator | Tuesday 17 February 2026 04:53:32 +0000 (0:00:03.059) 0:00:05.088 ******
2026-02-17 04:54:41.470467 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-17 04:54:41.470477 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-17 04:54:41.470489 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-17 04:54:41.470500 | orchestrator | mariadb_bootstrap_restart
2026-02-17 04:54:41.470511 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:54:41.470522 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:54:41.470533 | orchestrator | changed: [testbed-node-0]
2026-02-17 04:54:41.470544 | orchestrator |
2026-02-17 04:54:41.470555 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-17 04:54:41.470565 | orchestrator | skipping: no hosts matched
2026-02-17 04:54:41.470576 | orchestrator |
2026-02-17 04:54:41.470587 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-17 04:54:41.470598 | orchestrator | skipping: no hosts matched
2026-02-17 04:54:41.470640 | orchestrator |
2026-02-17 04:54:41.470651 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-17 04:54:41.470662 | orchestrator | skipping: no hosts matched
2026-02-17 04:54:41.470673 | orchestrator |
2026-02-17 04:54:41.470684 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-17 04:54:41.470695 | orchestrator |
2026-02-17 04:54:41.470706 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-17 04:54:41.470717 | orchestrator | Tuesday 17 February 2026 04:54:40 +0000 (0:01:08.110) 0:01:13.199 ******
2026-02-17 04:54:41.470728 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:54:41.470739 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:54:41.470749 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:54:41.470760 | orchestrator |
2026-02-17 04:54:41.470771 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-17 04:54:41.470795 | orchestrator | Tuesday 17 February 2026 04:54:40 +0000 (0:00:00.287) 0:01:13.486 ******
2026-02-17 04:54:41.470806 | orchestrator | skipping: [testbed-node-0]
2026-02-17 04:54:41.470817 | orchestrator | skipping: [testbed-node-1]
2026-02-17 04:54:41.470828 | orchestrator | skipping: [testbed-node-2]
2026-02-17 04:54:41.470839 | orchestrator |
2026-02-17 04:54:41.470850 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 04:54:41.470862 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 04:54:41.470874 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-17 04:54:41.470886 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-17 04:54:41.470897 | orchestrator |
2026-02-17 04:54:41.470908 | orchestrator |
2026-02-17 04:54:41.470918 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 04:54:41.470929 | orchestrator | Tuesday 17 February 2026 04:54:41 +0000 (0:00:00.396) 0:01:13.883 ******
2026-02-17 04:54:41.470940 | orchestrator | ===============================================================================
2026-02-17 04:54:41.470951 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 68.11s
2026-02-17 04:54:41.470985 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.06s
2026-02-17 04:54:41.470997 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2026-02-17 04:54:41.471008 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s
2026-02-17 04:54:41.471019 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s
2026-02-17 04:54:41.471030 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s
2026-02-17 04:54:41.471041 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-02-17 04:54:41.471052 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s
2026-02-17 04:54:41.778739 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-02-17 04:54:41.786838 | orchestrator | + set -e
2026-02-17 04:54:41.787015 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-17 04:54:41.788576 | orchestrator | ++ export INTERACTIVE=false
2026-02-17 04:54:41.788702 | orchestrator | ++ INTERACTIVE=false
2026-02-17 04:54:41.788719 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-17 04:54:41.788730 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-17 04:54:41.788742 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-17 04:54:41.789485 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-17 04:54:41.792599 | orchestrator |
2026-02-17 04:54:41.792729 | orchestrator | # OpenStack endpoints
2026-02-17 04:54:41.792744 | orchestrator |
2026-02-17 04:54:41.792754 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-17 04:54:41.792764 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-17 04:54:41.792775 | orchestrator | + export OS_CLOUD=admin
2026-02-17 04:54:41.792784 | orchestrator | + OS_CLOUD=admin
2026-02-17 04:54:41.792794 | orchestrator | + echo
2026-02-17 04:54:41.792804 | orchestrator | + echo '# OpenStack endpoints'
2026-02-17 04:54:41.792814 | orchestrator | + echo
2026-02-17 04:54:41.792824 | orchestrator | + openstack endpoint list
2026-02-17 04:54:44.823779 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-17 04:54:44.823878 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-02-17 04:54:44.823892 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-17 04:54:44.823926 | orchestrator | | 025e3b7f783b40b3b7155367efb3629f | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-02-17 04:54:44.823952 | orchestrator | | 039b528ce4964ad7960ee8167e7ae0b6 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-02-17 04:54:44.823963 | orchestrator | | 085f50f30c16426685a4ce3473f080b5 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-17 04:54:44.823973 | orchestrator | | 0b77307004d54f37841c38aada983b7c | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-02-17 04:54:44.823983 | orchestrator | | 0c4de214ae9c40d8990b3ac0f288da74 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-02-17 04:54:44.823993 | orchestrator | | 1badff4b6cd640a798a65013b367dc52 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-02-17 04:54:44.824003 | orchestrator | | 2114083bf5c7422e969a6dcf1b5d4e32 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-17 04:54:44.824013 | orchestrator | | 28a1dfd5f11f40ae8bffd405320976af | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-02-17 04:54:44.824023 | orchestrator | | 4182df8e39b34b1b8e1fb6ab8c91409d | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-17 04:54:44.824032 | orchestrator | | 484f7955d53f4b7691687140a0cc5776 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-02-17 04:54:44.824042 | orchestrator | | 54c4bcec2bef41fd88d23f3316d3600b | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-02-17 04:54:44.824052 | orchestrator | | 556b3c215b5f4d37b2289ded755af690 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-02-17 04:54:44.824061 | orchestrator | | 59eafb2907cd4b09a42d3f8a3d7b8300 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-02-17 04:54:44.824071 | orchestrator | | 5a3239f2dcc04c4fad2e3e2f57c098ba | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-17 04:54:44.824080 | orchestrator | | 6c82c785caa345d9a856640fd5324471 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-02-17 04:54:44.824090 | orchestrator | | 72e0325cf95e48d99898955e3f2b61df | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-02-17 04:54:44.824100 | orchestrator | | 89bea0e518654dc9a56909a904992971 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-02-17 04:54:44.824109 | orchestrator | | 8ce0e4e4b35c43b98b1134dbfc6c05da | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-02-17 04:54:44.824119 | orchestrator | | 963f7c937712400d81d8489cdc646bf0 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-17 04:54:44.824128 | orchestrator | | 96eaaf337c364a9b82373eced3bdfb89 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-02-17 04:54:44.824155 | orchestrator | | 9f8889a251c44a62bf615f2db07fdebe | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-02-17 04:54:44.824172 | orchestrator | | a06e34df66274f70a301e90dacf51f01 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-02-17 04:54:44.824187 | orchestrator | | a70faf54f12545ce931e611c1d9a1fe5 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-02-17 04:54:44.824197 | orchestrator | | b0d3c2204fbc49158daf79629a63ebf0 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-02-17 04:54:44.824207 | orchestrator | | b2fec9c03f5d49aa9a336027a8ff79da | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-02-17 04:54:44.824216 | orchestrator | | b30b4d17618344f5a3e57b73ae100843 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-02-17 04:54:44.824226 | orchestrator | | d40a921ef8b44c65aa2d613d7bad1d20 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-02-17 04:54:44.824236 | orchestrator | | d5738f97f3294801bc914a4fd49c8cde | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-02-17 04:54:44.824245 | orchestrator | | dc00423c4c6f4748a9aff0a997c60aaa | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-02-17 04:54:44.824255 | orchestrator | | fe895954a8a84098b629bfaaa4c0bb2f | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-17 04:54:44.824265 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-17 04:54:45.065253 | orchestrator |
2026-02-17 04:54:45.065366 | orchestrator | # Cinder
2026-02-17 04:54:45.065383 | orchestrator |
2026-02-17 04:54:45.065395 | orchestrator | + echo
2026-02-17 04:54:45.065407 | orchestrator | + echo '# Cinder'
2026-02-17 04:54:45.065418 | orchestrator | + echo
2026-02-17 04:54:45.065430 | orchestrator | + openstack volume service list
2026-02-17 04:54:47.665302 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-17 04:54:47.665378 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-02-17 04:54:47.665383 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-17 04:54:47.665388 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-17T04:54:44.000000 |
2026-02-17 04:54:47.665392 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-17T04:54:44.000000 |
2026-02-17 04:54:47.665396 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-17T04:54:44.000000 |
2026-02-17 04:54:47.665400 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-17T04:54:43.000000 |
2026-02-17 04:54:47.665404 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-17T04:54:41.000000 |
2026-02-17 04:54:47.665408 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-17T04:54:42.000000 |
2026-02-17 04:54:47.665412 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-17T04:54:44.000000 |
2026-02-17 04:54:47.665416 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-17T04:54:46.000000 |
2026-02-17 04:54:47.665420 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-17T04:54:46.000000 |
2026-02-17 04:54:47.665440 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-17 04:54:47.928597 | orchestrator |
2026-02-17 04:54:47.928738 | orchestrator | # Neutron
2026-02-17 04:54:47.928752 | orchestrator |
2026-02-17 04:54:47.928763 | orchestrator | + echo
2026-02-17 04:54:47.928775 | orchestrator | + echo '# Neutron'
2026-02-17 04:54:47.928788 | orchestrator | + echo
2026-02-17 04:54:47.928799 | orchestrator | + openstack network agent list
2026-02-17 04:54:51.111145 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-17 04:54:51.111244 | orchestrator | | ID |
Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-02-17 04:54:51.111260 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-17 04:54:51.111272 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-02-17 04:54:51.111283 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-02-17 04:54:51.111294 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-02-17 04:54:51.111305 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-02-17 04:54:51.111336 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-02-17 04:54:51.111348 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-02-17 04:54:51.111359 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-17 04:54:51.111370 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-17 04:54:51.111381 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-17 04:54:51.111392 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-17 04:54:51.374786 | orchestrator | + openstack network service provider list 2026-02-17 04:54:53.951063 | orchestrator | +---------------+------+---------+ 2026-02-17 04:54:53.951168 | orchestrator 
| | Service Type | Name | Default | 2026-02-17 04:54:53.951182 | orchestrator | +---------------+------+---------+ 2026-02-17 04:54:53.951194 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-02-17 04:54:53.951205 | orchestrator | +---------------+------+---------+ 2026-02-17 04:54:54.262363 | orchestrator | 2026-02-17 04:54:54.262458 | orchestrator | # Nova 2026-02-17 04:54:54.262473 | orchestrator | 2026-02-17 04:54:54.262485 | orchestrator | + echo 2026-02-17 04:54:54.262496 | orchestrator | + echo '# Nova' 2026-02-17 04:54:54.262507 | orchestrator | + echo 2026-02-17 04:54:54.262518 | orchestrator | + openstack compute service list 2026-02-17 04:54:57.415072 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-17 04:54:57.415168 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-02-17 04:54:57.415185 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-17 04:54:57.415199 | orchestrator | | 3ad0c739-9b5d-408d-8378-a3b9f055f245 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-17T04:54:53.000000 | 2026-02-17 04:54:57.415244 | orchestrator | | c1e0b537-9c07-48f5-9d64-7207b2275680 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-17T04:54:49.000000 | 2026-02-17 04:54:57.415258 | orchestrator | | 466bf15a-d02e-49dc-929e-8137870dc838 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-17T04:54:50.000000 | 2026-02-17 04:54:57.415266 | orchestrator | | d917e830-632b-4ac1-b68a-a76a305c6a6a | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-17T04:54:54.000000 | 2026-02-17 04:54:57.415274 | orchestrator | | 27100e99-a315-4894-9435-6c63d53aa600 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-17T04:54:56.000000 | 2026-02-17 
04:54:57.415282 | orchestrator | | dde8eb98-3019-4639-91d8-43ade53192a9 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-17T04:54:47.000000 | 2026-02-17 04:54:57.415290 | orchestrator | | 73dfab74-6563-4b72-9a9a-570b252516ac | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-17T04:54:53.000000 | 2026-02-17 04:54:57.415298 | orchestrator | | 7a201016-4bf6-4e3e-89a4-1fea629af0ad | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-17T04:54:54.000000 | 2026-02-17 04:54:57.415306 | orchestrator | | 6cea8419-16dc-46b2-8a94-290362ee95a4 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-17T04:54:54.000000 | 2026-02-17 04:54:57.415314 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-17 04:54:57.693760 | orchestrator | + openstack hypervisor list 2026-02-17 04:55:00.321464 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-17 04:55:00.321550 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-17 04:55:00.321560 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-17 04:55:00.321567 | orchestrator | | 12ebbf25-5d7a-4144-9519-bc3152ca0cab | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-02-17 04:55:00.321574 | orchestrator | | 57bb7809-bec8-4f5f-832a-0fd87d933eaa | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-17 04:55:00.321581 | orchestrator | | 5ed33847-6261-45ef-a22f-57188535da14 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-17 04:55:00.321589 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-17 04:55:00.567982 | orchestrator | 2026-02-17 04:55:00.568074 | orchestrator | # Run OpenStack test play 2026-02-17 
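The service listings above are checked by eye for `enabled`/`up` rows. As a hedged illustration (not part of the job itself), the same check can be scripted against JSON output such as `openstack volume service list -f json`; the healthy rows in the sample below mirror the table above, while the `down` row is hypothetical, added only to show what the check reports.

```python
import json

def unhealthy(rows):
    """Return (Binary, Host) pairs that are not both enabled and up."""
    return [(r["Binary"], r["Host"])
            for r in rows
            if r["Status"] != "enabled" or r["State"] != "up"]

# Shaped like `openstack volume service list -f json` output; the last
# row is a hypothetical failure, not taken from the log above.
sample = json.loads("""
[
 {"Binary": "cinder-scheduler", "Host": "testbed-node-0",
  "Zone": "internal", "Status": "enabled", "State": "up"},
 {"Binary": "cinder-volume", "Host": "testbed-node-1@rbd-volumes",
  "Zone": "nova", "Status": "enabled", "State": "up"},
 {"Binary": "cinder-backup", "Host": "testbed-node-2",
  "Zone": "nova", "Status": "enabled", "State": "down"}
]
""")

print(unhealthy(sample))  # -> [('cinder-backup', 'testbed-node-2')]
```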
+ osism apply --environment openstack test
2026-02-17 04:55:02 | INFO  | Trying to run play test in environment openstack
2026-02-17 04:55:12 | INFO  | Task c59c7f97-8b8f-46f4-a537-acba83153808 (test) was prepared for execution.
2026-02-17 04:55:12 | INFO  | It takes a moment until task c59c7f97-8b8f-46f4-a537-acba83153808 (test) has been started and output is visible here.

PLAY [Create test project] *****************************************************

TASK [Create test domain] ******************************************************
Tuesday 17 February 2026 04:55:16 +0000 (0:00:00.069) 0:00:00.069 ******
changed: [localhost]

TASK [Create test-admin user] **************************************************
Tuesday 17 February 2026 04:55:20 +0000 (0:00:03.763) 0:00:03.833 ******
changed: [localhost]

TASK [Add manager role to user test-admin] *************************************
Tuesday 17 February 2026 04:55:24 +0000 (0:00:04.138) 0:00:07.972 ******
changed: [localhost]

TASK [Create test project] *****************************************************
Tuesday 17 February 2026 04:55:31 +0000 (0:00:06.410) 0:00:14.383 ******
changed: [localhost]

TASK [Create test user] ********************************************************
Tuesday 17 February 2026 04:55:35 +0000 (0:00:04.158) 0:00:18.541 ******
changed: [localhost]

TASK [Add member roles to user test] *******************************************
Tuesday 17 February 2026 04:55:39 +0000 (0:00:04.091) 0:00:22.632 ******
changed: [localhost] => (item=load-balancer_member)
changed: [localhost] => (item=member)
changed: [localhost] => (item=creator)

TASK [Create test server group] ************************************************
Tuesday 17 February 2026 04:55:50 +0000 (0:00:11.368) 0:00:34.001 ******
changed: [localhost]

TASK [Create ssh security group] ***********************************************
Tuesday 17 February 2026 04:55:54 +0000 (0:00:04.193) 0:00:38.195 ******
changed: [localhost]

TASK [Add rule to ssh security group] ******************************************
Tuesday 17 February 2026 04:55:59 +0000 (0:00:04.687) 0:00:42.883 ******
changed: [localhost]

TASK [Create icmp security group] **********************************************
Tuesday 17 February 2026 04:56:03 +0000 (0:00:04.127) 0:00:47.011 ******
changed: [localhost]

TASK [Add rule to icmp security group] *****************************************
Tuesday 17 February 2026 04:56:07 +0000 (0:00:03.819) 0:00:50.830 ******
changed: [localhost]

TASK [Create test keypair] *****************************************************
Tuesday 17 February 2026 04:56:11 +0000 (0:00:04.009) 0:00:54.839 ******
changed: [localhost]

TASK [Create test network] *****************************************************
Tuesday 17 February 2026 04:56:15 +0000 (0:00:03.763) 0:00:58.603 ******
changed: [localhost]

TASK [Create test subnet] ******************************************************
Tuesday 17 February 2026 04:56:19 +0000 (0:00:04.570) 0:01:03.173 ******
changed: [localhost]

TASK [Create test router] ******************************************************
Tuesday 17 February 2026 04:56:25 +0000 (0:00:05.411) 0:01:08.585 ******
changed: [localhost]

PLAY [Manage test instances and volumes] ***************************************

TASK [Get test server group] ***************************************************
Tuesday 17 February 2026 04:56:35 +0000 (0:00:10.656) 0:01:19.242 ******
ok: [localhost]

TASK [Detach test volume] ******************************************************
Tuesday 17 February 2026 04:56:39 +0000 (0:00:03.775) 0:01:23.018 ******
skipping: [localhost]

TASK [Delete test volume] ******************************************************
Tuesday 17 February 2026 04:56:39 +0000 (0:00:00.059) 0:01:23.078 ******
skipping: [localhost]

TASK [Delete test instances] ***************************************************
Tuesday 17 February 2026 04:56:39 +0000 (0:00:00.045) 0:01:23.123 ******
skipping: [localhost] => (item=test-4)
skipping: [localhost] => (item=test-3)
skipping: [localhost] => (item=test-2)
skipping: [localhost] => (item=test-1)
skipping: [localhost] => (item=test)
skipping: [localhost]

TASK [Wait for instance deletion to complete] **********************************
Tuesday 17 February 2026 04:56:40 +0000 (0:00:00.157) 0:01:23.280 ******
skipping: [localhost]

TASK [Create test instances] ***************************************************
Tuesday 17 February 2026 04:56:40 +0000 (0:00:00.159) 0:01:23.440 ******
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for instance creation to complete] **********************************
Tuesday 17 February 2026 04:56:44 +0000 (0:00:04.775) 0:01:28.215 ******
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j210916998200.3708', 'results_file': '/ansible/.ansible_async/j210916998200.3708', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j964567005643.3733', 'results_file': '/ansible/.ansible_async/j964567005643.3733', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j409840728340.3758', 'results_file': '/ansible/.ansible_async/j409840728340.3758', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j345129408813.3783', 'results_file': '/ansible/.ansible_async/j345129408813.3783', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j750357645159.3808', 'results_file': '/ansible/.ansible_async/j750357645159.3808', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})

TASK [Add metadata to instances] ***********************************************
Tuesday 17 February 2026 04:57:31 +0000 (0:00:46.674) 0:02:14.890 ******
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for metadata to be added] *******************************************
Tuesday 17 February 2026 04:57:36 +0000 (0:00:04.516) 0:02:19.407 ******
FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j516241751925.3912', 'results_file': '/ansible/.ansible_async/j516241751925.3912', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j667407019259.3937', 'results_file': '/ansible/.ansible_async/j667407019259.3937', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j623558975774.3962', 'results_file': '/ansible/.ansible_async/j623558975774.3962', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j745278093109.3987', 'results_file': '/ansible/.ansible_async/j745278093109.3987', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j417190967057.4012', 'results_file': '/ansible/.ansible_async/j417190967057.4012', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})

TASK [Add tag to instances] ****************************************************
Tuesday 17 February 2026 04:57:45 +0000 (0:00:09.280) 0:02:28.687 ******
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for tags to be added] ***********************************************
Tuesday 17 February 2026 04:57:50 +0000 (0:00:04.613) 0:02:33.301 ******
FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j912692525576.4081', 'results_file': '/ansible/.ansible_async/j912692525576.4081', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j449245872252.4106', 'results_file': '/ansible/.ansible_async/j449245872252.4106', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j768034624466.4132', 'results_file': '/ansible/.ansible_async/j768034624466.4132', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j744933279526.4158', 'results_file': '/ansible/.ansible_async/j744933279526.4158', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j615760022157.4184', 'results_file': '/ansible/.ansible_async/j615760022157.4184', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})

TASK [Create test volume] ******************************************************
Tuesday 17 February 2026 04:58:00 +0000 (0:00:10.210) 0:02:43.511 ******
changed: [localhost]

TASK [Attach test volume] ******************************************************
Tuesday 17 February 2026 04:58:06 +0000 (0:00:06.496) 0:02:50.007 ******
changed: [localhost]

TASK [Create floating ip address] **********************************************
Tuesday 17 February 2026 04:58:20 +0000 (0:00:13.310) 0:03:03.318 ******
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Tuesday 17 February 2026 04:58:24 +0000 (0:00:04.952) 0:03:08.270 ******
ok: [localhost] => {
    "msg": "192.168.112.113"
}

PLAY RECAP *********************************************************************
localhost : ok=26 changed=23 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 17 February 2026 04:58:25 +0000 (0:00:00.047) 0:03:08.318 ******
===============================================================================
Wait for instance creation to complete --------------------------------- 46.67s
Attach test volume ----------------------------------------------------- 13.31s
Add member roles to user test ------------------------------------------ 11.37s
Create test router ----------------------------------------------------- 10.66s
Wait for tags to be added ---------------------------------------------- 10.21s
Wait for metadata to be added ------------------------------------------- 9.28s
Create test volume ------------------------------------------------------ 6.50s
Add manager role to user test-admin ------------------------------------- 6.41s
Create test subnet ------------------------------------------------------ 5.41s
Create floating ip address ---------------------------------------------- 4.95s
Create test instances --------------------------------------------------- 4.78s
Create ssh security group ----------------------------------------------- 4.69s
Add tag to instances ---------------------------------------------------- 4.61s
Create test network ----------------------------------------------------- 4.57s
Add metadata to instances ----------------------------------------------- 4.52s
Create test server group ------------------------------------------------ 4.19s
Create test project ----------------------------------------------------- 4.16s
Create test-admin user -------------------------------------------------- 4.14s
Add rule to ssh security group ------------------------------------------ 4.13s
Create test user -------------------------------------------------------- 4.09s

+ server_list
+ openstack --os-cloud test server list
| ID | Name | Status | Networks | Image | Flavor |
| 0addbfd9-2657-4638-abb6-da06b9b1b05d | test-4 | ACTIVE | test=192.168.112.145, 192.168.200.171 | N/A (booted from volume) | SCS-1L-1 |
| 631c0611-2423-45c0-8cd9-f0b74f22522d | test-3 | ACTIVE | test=192.168.112.115, 192.168.200.254 | N/A (booted from volume) | SCS-1L-1 |
| dc54d4a3-95a4-4749-8524-27b4604fbb09 | test-2 | ACTIVE | test=192.168.112.143, 192.168.200.80 | N/A (booted from volume) | SCS-1L-1 |
| 449aedef-420f-43cf-bd98-0e7d6027882c | test | ACTIVE | test=192.168.112.113, 192.168.200.32 | N/A (booted from volume) | SCS-1L-1 |
| 79e3ac5c-06a7-4e7e-95bc-6b4a7a3443b6 | test-1 | ACTIVE | test=192.168.112.106, 192.168.200.140 | N/A (booted from volume) | SCS-1L-1 |

+ openstack --os-cloud test server show test
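The Networks column in the server listing above packs network name and addresses into one cell. As an illustrative sketch only (not a helper from this job), such a cell can be split into a mapping, assuming the `name=addr, addr` formatting shown above and `"; "` between multiple networks:

```python
def parse_networks(cell: str) -> dict:
    """Split a server-list Networks cell such as
    'test=192.168.112.113, 192.168.200.32' into {network: [addresses]}.
    Assumes openstackclient's 'name=a, b; name2=c' table formatting."""
    nets = {}
    for part in cell.split(";"):
        name, _, addrs = part.strip().partition("=")
        nets[name] = [a.strip() for a in addrs.split(",") if a.strip()]
    return nets

print(parse_networks("test=192.168.112.113, 192.168.200.32"))
# -> {'test': ['192.168.112.113', '192.168.200.32']}
```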
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-02-17T04:57:16.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.113, 192.168.200.32 |
| config_drive | |
| created | 2026-02-17T04:56:49Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | a84c601e7241637e634c2aa94b82df0f385f8e548913fc80d9f3c509 |
| host_status | None |
| id | 449aedef-420f-43cf-bd98-0e7d6027882c |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 780d5413cc48488faeb2ea4bad49b534 |
| properties | hostname='test' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-02-17T04:57:37Z |
| user_id | ac83cd3fa01f4e17931d753a05c96418 |
| volumes_attached | delete_on_termination='True', id='214e83fa-27de-4a47-836e-648998f37ba6' |
| | delete_on_termination='False', id='3e15b2a3-f57a-43f5-9f82-4d0ef3a2623f' |

+ openstack --os-cloud test server show test-1
04:58:35.971830 | orchestrator | | Field | Value | 2026-02-17 04:58:35.971847 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:35.971876 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-17 04:58:35.971947 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-17 04:58:35.971962 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-17 04:58:35.971973 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-02-17 04:58:35.972015 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-17 04:58:35.972034 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-17 04:58:35.972077 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-17 04:58:35.972097 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-17 04:58:35.972148 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-17 04:58:35.972179 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-17 04:58:35.972199 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-17 04:58:35.972210 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-17 04:58:35.972221 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-17 04:58:35.972242 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-17 04:58:35.972255 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-17 04:58:35.972268 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-17T04:57:15.000000 | 2026-02-17 04:58:35.972291 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-17 04:58:35.972304 | orchestrator | | accessIPv4 | | 2026-02-17 
04:58:35.972317 | orchestrator | | accessIPv6 | | 2026-02-17 04:58:35.972335 | orchestrator | | addresses | test=192.168.112.106, 192.168.200.140 | 2026-02-17 04:58:35.972349 | orchestrator | | config_drive | | 2026-02-17 04:58:35.972362 | orchestrator | | created | 2026-02-17T04:56:49Z | 2026-02-17 04:58:35.972374 | orchestrator | | description | None | 2026-02-17 04:58:35.972394 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-17 04:58:35.972407 | orchestrator | | hostId | a84c601e7241637e634c2aa94b82df0f385f8e548913fc80d9f3c509 | 2026-02-17 04:58:35.972419 | orchestrator | | host_status | None | 2026-02-17 04:58:35.972437 | orchestrator | | id | 79e3ac5c-06a7-4e7e-95bc-6b4a7a3443b6 | 2026-02-17 04:58:35.972449 | orchestrator | | image | N/A (booted from volume) | 2026-02-17 04:58:35.972460 | orchestrator | | key_name | test | 2026-02-17 04:58:35.972476 | orchestrator | | locked | False | 2026-02-17 04:58:35.972488 | orchestrator | | locked_reason | None | 2026-02-17 04:58:35.972499 | orchestrator | | name | test-1 | 2026-02-17 04:58:35.972516 | orchestrator | | pinned_availability_zone | None | 2026-02-17 04:58:35.972528 | orchestrator | | progress | 0 | 2026-02-17 04:58:35.972539 | orchestrator | | project_id | 780d5413cc48488faeb2ea4bad49b534 | 2026-02-17 04:58:35.972550 | orchestrator | | properties | hostname='test-1' | 2026-02-17 04:58:35.972568 | orchestrator | | security_groups | name='icmp' | 2026-02-17 04:58:35.972580 | orchestrator | | | name='ssh' | 2026-02-17 04:58:35.972591 | orchestrator | | server_groups | None | 2026-02-17 04:58:35.972603 | orchestrator | | status | ACTIVE | 2026-02-17 
04:58:35.972614 | orchestrator | | tags | test | 2026-02-17 04:58:35.972632 | orchestrator | | trusted_image_certificates | None | 2026-02-17 04:58:35.972643 | orchestrator | | updated | 2026-02-17T04:57:37Z | 2026-02-17 04:58:35.972654 | orchestrator | | user_id | ac83cd3fa01f4e17931d753a05c96418 | 2026-02-17 04:58:35.972665 | orchestrator | | volumes_attached | delete_on_termination='True', id='ffdb1890-7c82-4831-b133-cd2063ef7d37' | 2026-02-17 04:58:35.974071 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:36.215568 | orchestrator | + openstack --os-cloud test server show test-2 2026-02-17 04:58:39.218000 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:39.218120 | orchestrator | | Field | Value | 2026-02-17 04:58:39.218143 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:39.218151 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-17 04:58:39.218171 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-17 04:58:39.218177 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-17 04:58:39.218182 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-02-17 04:58:39.218187 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-17 04:58:39.218192 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-17 04:58:39.218208 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-17 04:58:39.218214 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-17 04:58:39.218219 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-17 04:58:39.218224 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-17 04:58:39.218232 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-17 04:58:39.218241 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-17 04:58:39.218246 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-17 04:58:39.218251 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-17 04:58:39.218256 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-17 04:58:39.218261 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-17T04:57:17.000000 | 2026-02-17 04:58:39.218269 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-17 04:58:39.218274 | orchestrator | | accessIPv4 | | 2026-02-17 04:58:39.218279 | orchestrator | | accessIPv6 | | 2026-02-17 04:58:39.218284 | orchestrator | | addresses | test=192.168.112.143, 192.168.200.80 | 2026-02-17 04:58:39.218296 | orchestrator | | config_drive | | 2026-02-17 04:58:39.218301 | orchestrator | | created | 2026-02-17T04:56:50Z | 2026-02-17 04:58:39.218306 | orchestrator | | description | None | 2026-02-17 04:58:39.218311 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-17 04:58:39.218316 | orchestrator | | hostId | a08ea271eff16ecaf38be15256732f1042b83e209dbf4c3d617186a4 | 2026-02-17 04:58:39.218321 | orchestrator | | host_status | None | 2026-02-17 04:58:39.218329 | orchestrator | | id | dc54d4a3-95a4-4749-8524-27b4604fbb09 | 2026-02-17 04:58:39.218334 | orchestrator | | image | N/A (booted from volume) | 2026-02-17 04:58:39.218339 | orchestrator | | key_name | test | 2026-02-17 04:58:39.218349 | orchestrator | | locked | False | 2026-02-17 04:58:39.218356 | orchestrator | | locked_reason | None | 2026-02-17 04:58:39.218361 | orchestrator | | name | test-2 | 2026-02-17 04:58:39.218366 | orchestrator | | pinned_availability_zone | None | 2026-02-17 04:58:39.218371 | orchestrator | | progress | 0 | 2026-02-17 04:58:39.218376 | orchestrator | | project_id | 780d5413cc48488faeb2ea4bad49b534 | 2026-02-17 04:58:39.218381 | orchestrator | | properties | hostname='test-2' | 2026-02-17 04:58:39.218390 | orchestrator | | security_groups | name='icmp' | 2026-02-17 04:58:39.218395 | orchestrator | | | name='ssh' | 2026-02-17 04:58:39.218403 | orchestrator | | server_groups | None | 2026-02-17 04:58:39.218411 | orchestrator | | status | ACTIVE | 2026-02-17 04:58:39.218416 | orchestrator | | tags | test | 2026-02-17 04:58:39.218421 | orchestrator | | trusted_image_certificates | None | 2026-02-17 04:58:39.218425 | orchestrator | | updated | 2026-02-17T04:57:38Z | 2026-02-17 04:58:39.218430 | orchestrator | | user_id | ac83cd3fa01f4e17931d753a05c96418 | 2026-02-17 04:58:39.218435 | orchestrator | | volumes_attached | delete_on_termination='True', id='fd30cb2c-ba7f-4caf-bdeb-0db16b90d868' | 2026-02-17 04:58:39.220135 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:39.477666 | orchestrator | + openstack --os-cloud test server show test-3 2026-02-17 04:58:42.379975 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:42.380105 | orchestrator | | Field | Value | 2026-02-17 04:58:42.380130 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:42.380152 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-17 04:58:42.380161 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-17 04:58:42.380169 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-17 04:58:42.380178 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-02-17 04:58:42.380187 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-17 04:58:42.380195 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-17 
04:58:42.380220 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-17 04:58:42.380229 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-17 04:58:42.380245 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-17 04:58:42.380253 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-17 04:58:42.380262 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-17 04:58:42.380270 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-17 04:58:42.380279 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-17 04:58:42.380287 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-17 04:58:42.380296 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-17 04:58:42.380304 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-17T04:57:16.000000 | 2026-02-17 04:58:42.380319 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-17 04:58:42.380333 | orchestrator | | accessIPv4 | | 2026-02-17 04:58:42.380342 | orchestrator | | accessIPv6 | | 2026-02-17 04:58:42.380350 | orchestrator | | addresses | test=192.168.112.115, 192.168.200.254 | 2026-02-17 04:58:42.380695 | orchestrator | | config_drive | | 2026-02-17 04:58:42.380708 | orchestrator | | created | 2026-02-17T04:56:51Z | 2026-02-17 04:58:42.380718 | orchestrator | | description | None | 2026-02-17 04:58:42.380727 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-17 04:58:42.380737 | orchestrator | | hostId | a08ea271eff16ecaf38be15256732f1042b83e209dbf4c3d617186a4 | 2026-02-17 04:58:42.380747 | orchestrator | | host_status | None | 2026-02-17 04:58:42.380770 | orchestrator | | id | 
631c0611-2423-45c0-8cd9-f0b74f22522d | 2026-02-17 04:58:42.380784 | orchestrator | | image | N/A (booted from volume) | 2026-02-17 04:58:42.380795 | orchestrator | | key_name | test | 2026-02-17 04:58:42.380804 | orchestrator | | locked | False | 2026-02-17 04:58:42.380814 | orchestrator | | locked_reason | None | 2026-02-17 04:58:42.380825 | orchestrator | | name | test-3 | 2026-02-17 04:58:42.380834 | orchestrator | | pinned_availability_zone | None | 2026-02-17 04:58:42.380843 | orchestrator | | progress | 0 | 2026-02-17 04:58:42.380853 | orchestrator | | project_id | 780d5413cc48488faeb2ea4bad49b534 | 2026-02-17 04:58:42.380867 | orchestrator | | properties | hostname='test-3' | 2026-02-17 04:58:42.380882 | orchestrator | | security_groups | name='icmp' | 2026-02-17 04:58:42.380914 | orchestrator | | | name='ssh' | 2026-02-17 04:58:42.380924 | orchestrator | | server_groups | None | 2026-02-17 04:58:42.380932 | orchestrator | | status | ACTIVE | 2026-02-17 04:58:42.380940 | orchestrator | | tags | test | 2026-02-17 04:58:42.380949 | orchestrator | | trusted_image_certificates | None | 2026-02-17 04:58:42.380969 | orchestrator | | updated | 2026-02-17T04:57:39Z | 2026-02-17 04:58:42.380977 | orchestrator | | user_id | ac83cd3fa01f4e17931d753a05c96418 | 2026-02-17 04:58:42.380985 | orchestrator | | volumes_attached | delete_on_termination='True', id='db6f2eb4-a629-4423-b6ec-52c7292a25e7' | 2026-02-17 04:58:42.384028 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:42.651212 | orchestrator | + openstack --os-cloud test server show test-4 2026-02-17 04:58:45.635669 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:45.635814 | orchestrator | | Field | Value | 2026-02-17 04:58:45.635833 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:45.635845 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-17 04:58:45.635857 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-17 04:58:45.635869 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-17 04:58:45.635880 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-02-17 04:58:45.635891 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-17 04:58:45.635969 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-17 04:58:45.636002 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-17 04:58:45.636014 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-17 04:58:45.636032 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-17 04:58:45.636044 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-17 04:58:45.636055 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-17 04:58:45.636066 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-17 04:58:45.636078 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-02-17 04:58:45.636089 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-17 04:58:45.636109 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-17 04:58:45.636120 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-17T04:57:18.000000 | 2026-02-17 04:58:45.636139 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-17 04:58:45.636151 | orchestrator | | accessIPv4 | | 2026-02-17 04:58:45.636167 | orchestrator | | accessIPv6 | | 2026-02-17 04:58:45.636178 | orchestrator | | addresses | test=192.168.112.145, 192.168.200.171 | 2026-02-17 04:58:45.636190 | orchestrator | | config_drive | | 2026-02-17 04:58:45.636201 | orchestrator | | created | 2026-02-17T04:56:51Z | 2026-02-17 04:58:45.636214 | orchestrator | | description | None | 2026-02-17 04:58:45.636245 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-17 04:58:45.636266 | orchestrator | | hostId | a08ea271eff16ecaf38be15256732f1042b83e209dbf4c3d617186a4 | 2026-02-17 04:58:45.636287 | orchestrator | | host_status | None | 2026-02-17 04:58:45.636317 | orchestrator | | id | 0addbfd9-2657-4638-abb6-da06b9b1b05d | 2026-02-17 04:58:45.636337 | orchestrator | | image | N/A (booted from volume) | 2026-02-17 04:58:45.636364 | orchestrator | | key_name | test | 2026-02-17 04:58:45.636378 | orchestrator | | locked | False | 2026-02-17 04:58:45.636391 | orchestrator | | locked_reason | None | 2026-02-17 04:58:45.636403 | orchestrator | | name | test-4 | 2026-02-17 04:58:45.636424 | orchestrator | | pinned_availability_zone | None | 2026-02-17 04:58:45.636438 | orchestrator | | progress | 0 | 2026-02-17 
04:58:45.636450 | orchestrator | | project_id | 780d5413cc48488faeb2ea4bad49b534 | 2026-02-17 04:58:45.636463 | orchestrator | | properties | hostname='test-4' | 2026-02-17 04:58:45.636484 | orchestrator | | security_groups | name='icmp' | 2026-02-17 04:58:45.636502 | orchestrator | | | name='ssh' | 2026-02-17 04:58:45.636517 | orchestrator | | server_groups | None | 2026-02-17 04:58:45.636530 | orchestrator | | status | ACTIVE | 2026-02-17 04:58:45.636543 | orchestrator | | tags | test | 2026-02-17 04:58:45.636555 | orchestrator | | trusted_image_certificates | None | 2026-02-17 04:58:45.636575 | orchestrator | | updated | 2026-02-17T04:57:40Z | 2026-02-17 04:58:45.636588 | orchestrator | | user_id | ac83cd3fa01f4e17931d753a05c96418 | 2026-02-17 04:58:45.636599 | orchestrator | | volumes_attached | delete_on_termination='True', id='5e23e2bb-8d26-4f5d-a214-ef0bdf333765' | 2026-02-17 04:58:45.640318 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-17 04:58:45.899858 | orchestrator | + server_ping 2026-02-17 04:58:45.902161 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-02-17 04:58:45.902216 | orchestrator | ++ tr -d '\r' 2026-02-17 04:58:48.768460 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-17 04:58:48.768554 | orchestrator | + ping -c3 192.168.112.143 2026-02-17 04:58:48.789624 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 
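[Editor's note] The `server_ping` trace that begins here iterates over every ACTIVE floating IP and pings it three times. A minimal standalone sketch of the same pattern follows; the `probe` indirection is an illustrative addition (the job calls `ping -c3` directly), and the `openstack` command in the comment is the one from the trace.

```shell
# Sketch of the server_ping pattern: read floating IP addresses one per
# line, strip any CR left over from CLI output, and probe each address.
# In the job the input comes from:
#   openstack --os-cloud test floating ip list --status ACTIVE \
#       -f value -c "Floating IP Address"
# and the probe is simply "ping -c3 <address>". The probe function is an
# illustrative indirection so the loop can be exercised without a cloud.
probe() { ping -c3 "$1"; }

server_ping() {
    tr -d '\r' | while read -r address; do
        if [ -n "$address" ]; then
            probe "$address"
        fi
    done
}
```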
2026-02-17 04:58:48.789729 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=13.4 ms
2026-02-17 04:58:49.779549 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.59 ms
2026-02-17 04:58:50.779968 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=2.09 ms
2026-02-17 04:58:50.780100 | orchestrator |
2026-02-17 04:58:50.780131 | orchestrator | --- 192.168.112.143 ping statistics ---
2026-02-17 04:58:50.780153 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-17 04:58:50.780170 | orchestrator | rtt min/avg/max/mdev = 2.091/6.011/13.358/5.198 ms
2026-02-17 04:58:50.781148 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-17 04:58:50.781180 | orchestrator | + ping -c3 192.168.112.115
2026-02-17 04:58:50.794643 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data.
2026-02-17 04:58:50.794701 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=9.71 ms
2026-02-17 04:58:51.788585 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.44 ms
2026-02-17 04:58:52.789924 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=2.01 ms
2026-02-17 04:58:52.790003 | orchestrator |
2026-02-17 04:58:52.790040 | orchestrator | --- 192.168.112.115 ping statistics ---
2026-02-17 04:58:52.790050 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-17 04:58:52.790058 | orchestrator | rtt min/avg/max/mdev = 2.006/4.717/9.710/3.534 ms
2026-02-17 04:58:52.791060 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-17 04:58:52.791100 | orchestrator | + ping -c3 192.168.112.145
2026-02-17 04:58:52.802769 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data.
2026-02-17 04:58:52.802815 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=7.68 ms
2026-02-17 04:58:53.799356 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=2.57 ms
2026-02-17 04:58:54.800795 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=2.18 ms
2026-02-17 04:58:54.800893 | orchestrator |
2026-02-17 04:58:54.800938 | orchestrator | --- 192.168.112.145 ping statistics ---
2026-02-17 04:58:54.800953 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-17 04:58:54.801055 | orchestrator | rtt min/avg/max/mdev = 2.184/4.142/7.679/2.505 ms
2026-02-17 04:58:54.801463 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-17 04:58:54.801489 | orchestrator | + ping -c3 192.168.112.106
2026-02-17 04:58:54.816598 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data.
2026-02-17 04:58:54.816675 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=9.89 ms
2026-02-17 04:58:55.808976 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=1.82 ms
2026-02-17 04:58:56.810810 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.66 ms
2026-02-17 04:58:56.810967 | orchestrator |
2026-02-17 04:58:56.810993 | orchestrator | --- 192.168.112.106 ping statistics ---
2026-02-17 04:58:56.811011 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-17 04:58:56.811030 | orchestrator | rtt min/avg/max/mdev = 1.660/4.455/9.890/3.843 ms
2026-02-17 04:58:56.811050 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-17 04:58:56.811068 | orchestrator | + ping -c3 192.168.112.113
2026-02-17 04:58:56.824324 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data.
2026-02-17 04:58:56.824403 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=10.1 ms 2026-02-17 04:58:57.818242 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=2.39 ms 2026-02-17 04:58:58.820602 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=1.99 ms 2026-02-17 04:58:58.820742 | orchestrator | 2026-02-17 04:58:58.820986 | orchestrator | --- 192.168.112.113 ping statistics --- 2026-02-17 04:58:58.821018 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-17 04:58:58.821039 | orchestrator | rtt min/avg/max/mdev = 1.993/4.837/10.130/3.745 ms 2026-02-17 04:58:58.821074 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-17 04:58:58.918055 | orchestrator | ok: Runtime: 0:08:35.581408 2026-02-17 04:58:58.956475 | 2026-02-17 04:58:58.956608 | TASK [Run tempest] 2026-02-17 04:58:59.490083 | orchestrator | skipping: Conditional result was False 2026-02-17 04:58:59.504571 | 2026-02-17 04:58:59.504708 | TASK [Check prometheus alert status] 2026-02-17 04:59:00.044001 | orchestrator | skipping: Conditional result was False 2026-02-17 04:59:00.056822 | 2026-02-17 04:59:00.056963 | PLAY [Upgrade testbed] 2026-02-17 04:59:00.067383 | 2026-02-17 04:59:00.067509 | TASK [Print next ceph version] 2026-02-17 04:59:00.145908 | orchestrator | ok 2026-02-17 04:59:00.156146 | 2026-02-17 04:59:00.156337 | TASK [Print next openstack version] 2026-02-17 04:59:00.224538 | orchestrator | ok 2026-02-17 04:59:00.235642 | 2026-02-17 04:59:00.235769 | TASK [Print next manager version] 2026-02-17 04:59:00.300762 | orchestrator | ok 2026-02-17 04:59:00.312961 | 2026-02-17 04:59:00.313119 | TASK [Set cloud fact (Zuul deployment)] 2026-02-17 04:59:00.371464 | orchestrator | ok 2026-02-17 04:59:00.383599 | 2026-02-17 04:59:00.383757 | TASK [Set cloud fact (local deployment)] 2026-02-17 04:59:00.419135 | orchestrator | skipping: Conditional result was False 2026-02-17 04:59:00.433174 | 2026-02-17 
04:59:00.433381 | TASK [Fetch manager address] 2026-02-17 04:59:00.706529 | orchestrator | ok 2026-02-17 04:59:00.716654 | 2026-02-17 04:59:00.716813 | TASK [Set manager_host address] 2026-02-17 04:59:00.788086 | orchestrator | ok 2026-02-17 04:59:00.796210 | 2026-02-17 04:59:00.796358 | TASK [Run upgrade] 2026-02-17 04:59:01.504411 | orchestrator | + set -e 2026-02-17 04:59:01.504556 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-17 04:59:01.504571 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-17 04:59:01.504586 | orchestrator | + CEPH_VERSION=reef 2026-02-17 04:59:01.504595 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-17 04:59:01.504603 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-17 04:59:01.504619 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-02-17 04:59:01.510510 | orchestrator | + set -e 2026-02-17 04:59:01.510624 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 04:59:01.510640 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 04:59:01.510657 | orchestrator | ++ INTERACTIVE=false 2026-02-17 04:59:01.510668 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 04:59:01.510686 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 04:59:01.511755 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-02-17 04:59:01.554113 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-02-17 04:59:01.555342 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-17 04:59:01.596029 | orchestrator | 2026-02-17 04:59:01.596132 | orchestrator | # UPGRADE MANAGER 2026-02-17 04:59:01.596153 | orchestrator | 2026-02-17 04:59:01.596166 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-02-17 04:59:01.596179 | orchestrator | + echo 2026-02-17 04:59:01.596191 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-02-17 04:59:01.596205 | orchestrator | + echo 2026-02-17 04:59:01.596217 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-17 04:59:01.596230 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-17 04:59:01.596241 | orchestrator | + CEPH_VERSION=reef 2026-02-17 04:59:01.596253 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-17 04:59:01.596265 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-17 04:59:01.596277 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-02-17 04:59:01.602376 | orchestrator | + set -e 2026-02-17 04:59:01.602468 | orchestrator | + VERSION=10.0.0-rc.1 2026-02-17 04:59:01.602483 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-17 04:59:01.606975 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-02-17 04:59:01.607022 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-17 04:59:01.612411 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-17 04:59:01.617113 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-17 04:59:01.623502 | orchestrator | /opt/configuration ~ 2026-02-17 04:59:01.623543 | orchestrator | + set -e 2026-02-17 04:59:01.623553 | orchestrator | + pushd /opt/configuration 2026-02-17 04:59:01.623563 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-17 04:59:01.623574 | orchestrator | + source /opt/venv/bin/activate 2026-02-17 04:59:01.624541 | orchestrator | ++ deactivate nondestructive 2026-02-17 04:59:01.624581 | orchestrator | ++ '[' -n '' ']' 2026-02-17 04:59:01.624586 | orchestrator | ++ '[' -n '' ']' 2026-02-17 04:59:01.624590 | orchestrator | ++ hash -r 2026-02-17 04:59:01.624600 | orchestrator | ++ '[' -n '' ']' 2026-02-17 04:59:01.624605 | orchestrator | ++ unset VIRTUAL_ENV 
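The `set-manager-version.sh` trace above pins the manager version by rewriting `configuration.yml` in place with `sed`, then deletes any explicit `ceph_version`/`openstack_version` pins so the defaults bundled with the pinned release take effect. A minimal sketch of that pattern, using a temporary file as a stand-in for `/opt/configuration/environments/manager/configuration.yml` (the stand-in content is illustrative, not the real file):

```shell
#!/usr/bin/env bash
set -e

# Stand-in for /opt/configuration/environments/manager/configuration.yml
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
manager_version: 9.5.0
ceph_version: quincy
openstack_version: 2024.1
EOF

VERSION="10.0.0-rc.1"

# Pin the manager version in place, as set-manager-version.sh does
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CONFIG"

# For a pinned (non-latest) release, drop the explicit ceph/openstack pins
# so the versions shipped with that release apply
if [[ "$VERSION" != "latest" ]]; then
    sed -i '/ceph_version:/d' "$CONFIG"
    sed -i '/openstack_version:/d' "$CONFIG"
fi

cat "$CONFIG"
```

After this runs, the file contains only the pinned `manager_version` line, matching the three `sed` invocations visible in the trace.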
2026-02-17 04:59:01.624609 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-17 04:59:01.624614 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-17 04:59:01.624620 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-17 04:59:01.624625 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-17 04:59:01.624629 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-17 04:59:01.624634 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-17 04:59:01.624644 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 04:59:01.624649 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 04:59:01.624654 | orchestrator | ++ export PATH 2026-02-17 04:59:01.624658 | orchestrator | ++ '[' -n '' ']' 2026-02-17 04:59:01.624662 | orchestrator | ++ '[' -z '' ']' 2026-02-17 04:59:01.624666 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-17 04:59:01.624671 | orchestrator | ++ PS1='(venv) ' 2026-02-17 04:59:01.624675 | orchestrator | ++ export PS1 2026-02-17 04:59:01.624679 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-17 04:59:01.624683 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-17 04:59:01.624687 | orchestrator | ++ hash -r 2026-02-17 04:59:01.624694 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-17 04:59:02.727000 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-17 04:59:02.728470 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-17 04:59:02.730437 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-17 04:59:02.732151 | orchestrator | Requirement already satisfied: PyYAML in 
/opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-17 04:59:02.733599 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-02-17 04:59:02.743947 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-17 04:59:02.745292 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-17 04:59:02.746571 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-17 04:59:02.747730 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-17 04:59:02.779607 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-17 04:59:02.780884 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-17 04:59:02.782709 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-17 04:59:02.784192 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-17 04:59:02.788086 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-17 04:59:02.996426 | orchestrator | ++ which gilt 2026-02-17 04:59:02.997485 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-17 04:59:02.997515 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-17 04:59:03.257489 | orchestrator | osism.cfg-generics: 2026-02-17 04:59:03.360618 | orchestrator | - copied (v0.20251130.0) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-17 04:59:03.361594 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-17 04:59:03.364305 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-17 04:59:03.364342 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-17 04:59:04.143142 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-17 04:59:04.157288 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-17 04:59:04.490217 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-17 04:59:04.540071 | orchestrator | ~ 2026-02-17 04:59:04.540184 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-17 04:59:04.540212 | orchestrator | + deactivate 2026-02-17 04:59:04.540234 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-17 04:59:04.540266 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 04:59:04.540286 | orchestrator | + export PATH 2026-02-17 04:59:04.540304 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-17 04:59:04.540323 | orchestrator | + '[' -n '' ']' 2026-02-17 04:59:04.540341 | orchestrator | + hash -r 2026-02-17 04:59:04.540358 | orchestrator | + '[' -n '' ']' 2026-02-17 04:59:04.540377 | orchestrator | + unset VIRTUAL_ENV 2026-02-17 04:59:04.540394 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-17 04:59:04.540406 | 
orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-17 04:59:04.540417 | orchestrator | + unset -f deactivate 2026-02-17 04:59:04.540428 | orchestrator | + popd 2026-02-17 04:59:04.541907 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-17 04:59:04.541991 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-17 04:59:04.551509 | orchestrator | + set -e 2026-02-17 04:59:04.551584 | orchestrator | + NAMESPACE=kolla/release 2026-02-17 04:59:04.551600 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-17 04:59:04.560652 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-17 04:59:04.571137 | orchestrator | /opt/configuration ~ 2026-02-17 04:59:04.571242 | orchestrator | + set -e 2026-02-17 04:59:04.571265 | orchestrator | + pushd /opt/configuration 2026-02-17 04:59:04.571290 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-17 04:59:04.571306 | orchestrator | + source /opt/venv/bin/activate 2026-02-17 04:59:04.571321 | orchestrator | ++ deactivate nondestructive 2026-02-17 04:59:04.571337 | orchestrator | ++ '[' -n '' ']' 2026-02-17 04:59:04.571352 | orchestrator | ++ '[' -n '' ']' 2026-02-17 04:59:04.571367 | orchestrator | ++ hash -r 2026-02-17 04:59:04.571381 | orchestrator | ++ '[' -n '' ']' 2026-02-17 04:59:04.571397 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-17 04:59:04.571414 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-17 04:59:04.571446 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-17 04:59:04.571474 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-17 04:59:04.571490 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-17 04:59:04.571505 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-17 04:59:04.571526 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-17 04:59:04.571543 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 04:59:04.571563 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 04:59:04.571712 | orchestrator | ++ export PATH 2026-02-17 04:59:04.571744 | orchestrator | ++ '[' -n '' ']' 2026-02-17 04:59:04.571761 | orchestrator | ++ '[' -z '' ']' 2026-02-17 04:59:04.571775 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-17 04:59:04.571791 | orchestrator | ++ PS1='(venv) ' 2026-02-17 04:59:04.571811 | orchestrator | ++ export PS1 2026-02-17 04:59:04.571825 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-17 04:59:04.571840 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-17 04:59:04.571855 | orchestrator | ++ hash -r 2026-02-17 04:59:04.572080 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-17 04:59:05.071558 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-17 04:59:05.072259 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-17 04:59:05.073880 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-17 04:59:05.075245 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-17 04:59:05.076560 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-17 04:59:05.086556 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-17 04:59:05.088729 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-17 04:59:05.090390 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-17 04:59:05.090765 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-17 04:59:05.123417 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-17 04:59:05.124879 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-17 04:59:05.126792 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-17 04:59:05.128227 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-17 04:59:05.132394 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-17 04:59:05.355180 | orchestrator | ++ which gilt 2026-02-17 04:59:05.356559 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-17 04:59:05.356585 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-17 04:59:05.512630 | orchestrator | osism.cfg-generics: 2026-02-17 04:59:05.588411 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-17 04:59:05.588847 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-17 04:59:05.589465 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-17 04:59:05.589673 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-17 04:59:06.058464 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-17 04:59:06.072545 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-17 04:59:06.450514 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-17 04:59:06.505499 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-17 04:59:06.505579 | orchestrator | + deactivate 2026-02-17 04:59:06.505606 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-17 04:59:06.505614 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-17 04:59:06.505620 | orchestrator | + export PATH 2026-02-17 04:59:06.505625 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-17 04:59:06.505631 | orchestrator | + '[' -n '' ']' 2026-02-17 04:59:06.505637 | orchestrator | + hash -r 2026-02-17 04:59:06.505651 | orchestrator | + '[' -n '' ']' 2026-02-17 04:59:06.505656 | orchestrator | + unset VIRTUAL_ENV 2026-02-17 04:59:06.505662 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-17 04:59:06.505668 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-17 04:59:06.505673 | orchestrator | + unset -f deactivate 2026-02-17 04:59:06.505679 | orchestrator | + popd 2026-02-17 04:59:06.505783 | orchestrator | ~ 2026-02-17 04:59:06.507575 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-17 04:59:06.562307 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-17 04:59:06.562914 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-17 04:59:06.657110 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-17 04:59:06.657200 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-17 04:59:06.662878 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-17 04:59:06.670704 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-17 04:59:06.749567 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-17 04:59:06.750330 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-17 04:59:06.857457 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-17 04:59:06.857561 | orchestrator | ++ echo true 2026-02-17 04:59:06.858221 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-17 04:59:06.859472 | orchestrator | +++ semver 2024.2 2024.2 2026-02-17 04:59:06.946510 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-17 04:59:06.947221 | orchestrator | +++ semver 2024.2 2025.1 2026-02-17 04:59:07.009292 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-17 04:59:07.009369 | orchestrator | ++ echo false 2026-02-17 04:59:07.010427 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-17 04:59:07.010446 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-17 04:59:07.010454 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-17 04:59:07.010460 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-17 04:59:07.010467 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 
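The `semver` calls above print `-1`, `0`, or `1` and gate the RabbitMQ vhost changes on whether the upgrade crosses the 10.0.0 boundary (old manager below 10, new manager at or above it). A rough sketch of such a three-way comparator built on `sort -V` — an assumption, since the actual `semver` helper used in the container is not shown in the log:

```shell
#!/usr/bin/env bash
set -e

# Hypothetical comparator printing -1/0/1, like the semver helper in the trace
vercmp() {
    # Strip a leading "v", as in v0.20251130.0
    local a="${1#v}" b="${2#v}"
    if [[ "$a" == "$b" ]]; then
        echo 0
        return
    fi
    # GNU sort -V orders version strings; whichever sorts first is smaller
    if [[ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" == "$a" ]]; then
        echo -1
    else
        echo 1
    fi
}

OLD_MANAGER_VERSION=v0.20251130.0
MANAGER_VERSION=10.0.0-rc.1

# The upgrade "crosses 10" when the old version is below 10.0.0 and the
# new one is not below it
if [[ $(vercmp "$OLD_MANAGER_VERSION" "10.0.0") -lt 0 \
   && $(vercmp "$MANAGER_VERSION" "10.0.0") -ge 0 ]]; then
    MANAGER_UPGRADE_CROSSES_10=true
else
    MANAGER_UPGRADE_CROSSES_10=false
fi
echo "$MANAGER_UPGRADE_CROSSES_10"
```

Note the real script compares against `10.0.0-0` rather than `10.0.0`, which under strict semver rules makes pre-releases like `10.0.0-rc.1` count as "at or above" the boundary; `sort -V` ordering differs from semver for pre-release suffixes, so this sketch sidesteps that by comparing against the plain release.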
2026-02-17 04:59:07.018580 | orchestrator | + echo 'export RABBITMQ3TO4=true' 2026-02-17 04:59:07.019701 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-17 04:59:07.042196 | orchestrator | export RABBITMQ3TO4=true 2026-02-17 04:59:07.046267 | orchestrator | + osism update manager 2026-02-17 04:59:12.746254 | orchestrator | Collecting uv 2026-02-17 04:59:12.841981 | orchestrator | Downloading uv-0.10.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-17 04:59:12.862072 | orchestrator | Downloading uv-0.10.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.1 MB) 2026-02-17 04:59:13.922971 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.1/23.1 MB 23.4 MB/s eta 0:00:00 2026-02-17 04:59:13.980445 | orchestrator | Installing collected packages: uv 2026-02-17 04:59:14.459724 | orchestrator | Successfully installed uv-0.10.3 2026-02-17 04:59:15.039512 | orchestrator | Resolved 11 packages in 306ms 2026-02-17 04:59:15.061855 | orchestrator | Downloading cryptography (4.3MiB) 2026-02-17 04:59:15.086838 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-17 04:59:15.086970 | orchestrator | Downloading ansible (54.5MiB) 2026-02-17 04:59:15.086986 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-17 04:59:15.450215 | orchestrator | Downloaded netaddr 2026-02-17 04:59:15.549287 | orchestrator | Downloaded cryptography 2026-02-17 04:59:15.590663 | orchestrator | Downloaded ansible-core 2026-02-17 04:59:21.994358 | orchestrator | Downloaded ansible 2026-02-17 04:59:21.994559 | orchestrator | Prepared 11 packages in 6.95s 2026-02-17 04:59:22.512726 | orchestrator | Installed 11 packages in 515ms 2026-02-17 04:59:22.512816 | orchestrator | + ansible==11.11.0 2026-02-17 04:59:22.512830 | orchestrator | + ansible-core==2.18.13 2026-02-17 04:59:22.512841 | orchestrator | + cffi==2.0.0 2026-02-17 04:59:22.512852 | orchestrator | + cryptography==46.0.5 2026-02-17 04:59:22.512862 | orchestrator | + 
jinja2==3.1.6 2026-02-17 04:59:22.513339 | orchestrator | + markupsafe==3.0.3 2026-02-17 04:59:22.513360 | orchestrator | + netaddr==1.3.0 2026-02-17 04:59:22.513371 | orchestrator | + packaging==26.0 2026-02-17 04:59:22.513381 | orchestrator | + pycparser==3.0 2026-02-17 04:59:22.513391 | orchestrator | + pyyaml==6.0.3 2026-02-17 04:59:22.513402 | orchestrator | + resolvelib==1.0.1 2026-02-17 04:59:23.697143 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-200287m5bi0x72/tmph9cae276/ansible-collection-serviceso1ae5dpy'... 2026-02-17 04:59:24.951898 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-17 04:59:24.952064 | orchestrator | Already on 'main' 2026-02-17 04:59:25.425007 | orchestrator | Starting galaxy collection install process 2026-02-17 04:59:25.425124 | orchestrator | Process install dependency map 2026-02-17 04:59:25.425147 | orchestrator | Starting collection install process 2026-02-17 04:59:25.425167 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-17 04:59:25.425187 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-17 04:59:25.425207 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-17 04:59:25.929045 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-2004907gmyeg7m/tmpdq8_kdqe/ansible-playbooks-managerbirvrwps'... 2026-02-17 04:59:26.539456 | orchestrator | Your branch is up to date with 'origin/main'. 
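Just before `osism update manager`, the trace persists the migration flag with `echo 'export RABBITMQ3TO4=true' | sudo tee -a /opt/manager-vars.sh`. A plain `tee -a` appends unconditionally, so a re-run would duplicate the line; a sketch of an idempotent variant, using a temporary file as a stand-in for `/opt/manager-vars.sh`:

```shell
#!/usr/bin/env bash
set -e

VARS_FILE=$(mktemp)   # stand-in for /opt/manager-vars.sh
LINE='export RABBITMQ3TO4=true'

# Append the exact line only if it is not already present
append_once() {
    grep -qxF "$1" "$2" || echo "$1" >> "$2"
}

append_once "$LINE" "$VARS_FILE"
append_once "$LINE" "$VARS_FILE"   # second call is a no-op

grep -c 'RABBITMQ3TO4' "$VARS_FILE"
```

`grep -qxF` matches the whole line literally, so near-duplicates such as `export RABBITMQ3TO4=false` would still be appended.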
2026-02-17 04:59:26.539562 | orchestrator | Already on 'main' 2026-02-17 04:59:26.805436 | orchestrator | Starting galaxy collection install process 2026-02-17 04:59:26.805524 | orchestrator | Process install dependency map 2026-02-17 04:59:26.805536 | orchestrator | Starting collection install process 2026-02-17 04:59:26.805545 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-17 04:59:26.805552 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-17 04:59:26.805559 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-17 04:59:27.502724 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-17 04:59:27.502796 | orchestrator | -vvvv to see details 2026-02-17 04:59:27.942428 | orchestrator | 2026-02-17 04:59:27.942527 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-17 04:59:27.942543 | orchestrator | 2026-02-17 04:59:27.942555 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-17 04:59:32.198108 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:32.198244 | orchestrator | 2026-02-17 04:59:32.198260 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-17 04:59:32.276678 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-17 04:59:32.276810 | orchestrator | 2026-02-17 04:59:32.276853 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-17 04:59:34.250919 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:34.251114 | orchestrator | 2026-02-17 04:59:34.251133 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-17 04:59:34.323487 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:34.323640 | orchestrator | 2026-02-17 04:59:34.323657 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-17 04:59:34.429602 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-17 04:59:34.429726 | orchestrator | 2026-02-17 04:59:34.429742 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-17 04:59:38.649412 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-17 04:59:38.649542 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-17 04:59:38.649557 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-17 04:59:38.649586 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-17 04:59:38.649598 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-17 04:59:38.649609 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-17 04:59:38.649620 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-17 04:59:38.649632 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-17 04:59:38.649644 | orchestrator | 2026-02-17 04:59:38.649656 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-17 04:59:39.708945 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:39.709146 | orchestrator | 2026-02-17 04:59:39.709164 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-17 04:59:40.689838 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:40.690004 | orchestrator | 2026-02-17 04:59:40.690094 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-17 04:59:40.782250 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-17 04:59:40.782374 | orchestrator | 2026-02-17 04:59:40.782396 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-17 04:59:42.613200 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-17 04:59:42.613316 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-17 04:59:42.613332 | orchestrator | 2026-02-17 04:59:42.613346 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-17 04:59:43.622602 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:43.622704 | orchestrator | 2026-02-17 04:59:43.622722 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-17 04:59:43.692602 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:59:43.692689 | orchestrator | 2026-02-17 04:59:43.692703 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-17 04:59:43.780207 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-17 04:59:43.780331 | orchestrator | 2026-02-17 04:59:43.780359 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-17 04:59:44.686105 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:44.686178 | orchestrator | 2026-02-17 04:59:44.686185 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-17 04:59:44.753808 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-17 04:59:44.753928 | 
orchestrator | 2026-02-17 04:59:44.753945 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-17 04:59:46.711742 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-17 04:59:46.711834 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-17 04:59:46.711847 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:46.711857 | orchestrator | 2026-02-17 04:59:46.711865 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-17 04:59:47.606341 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:47.606438 | orchestrator | 2026-02-17 04:59:47.606454 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-17 04:59:47.671515 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:59:47.671615 | orchestrator | 2026-02-17 04:59:47.671630 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-17 04:59:47.782580 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-17 04:59:47.782673 | orchestrator | 2026-02-17 04:59:47.782688 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-17 04:59:48.427192 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:48.427291 | orchestrator | 2026-02-17 04:59:48.427306 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-17 04:59:49.000937 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:49.001090 | orchestrator | 2026-02-17 04:59:49.001109 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-17 04:59:50.888418 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-17 04:59:50.888518 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-17 04:59:50.888532 | orchestrator | 2026-02-17 04:59:50.888545 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-17 04:59:52.080728 | orchestrator | changed: [testbed-manager] 2026-02-17 04:59:52.080848 | orchestrator | 2026-02-17 04:59:52.080870 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-17 04:59:52.655887 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:52.656068 | orchestrator | 2026-02-17 04:59:52.656087 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-17 04:59:53.246728 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:53.246842 | orchestrator | 2026-02-17 04:59:53.246896 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-17 04:59:53.307562 | orchestrator | skipping: [testbed-manager] 2026-02-17 04:59:53.307662 | orchestrator | 2026-02-17 04:59:53.307678 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-17 04:59:53.381834 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-17 04:59:53.381923 | orchestrator | 2026-02-17 04:59:53.381937 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-17 04:59:53.436118 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:53.436202 | orchestrator | 2026-02-17 04:59:53.436216 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-17 04:59:56.528810 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-17 04:59:56.528944 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-17 04:59:56.529025 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
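The "Copy wrapper scripts" tasks above install small CLI shims (`osism`, plus `cilium`, `hubble`, and `flux` wrappers) on the manager host. The usual shape of such a wrapper is to forward the invocation into the right service container; the sketch below shows that pattern, with the container name and the `DOCKER` indirection being assumptions for illustration — the role's actual templates are not shown in the log:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper pattern; the real templates are installed by the
# osism.services.manager role and their exact contents are not in the log.
CONTAINER=${CONTAINER:-osism-ansible}
DOCKER=${DOCKER:-docker}   # swappable so the sketch can be dry-run without Docker

osism_wrapper() {
    "$DOCKER" exec "$CONTAINER" osism "$@"
}

# Dry-run: substitute `echo` for docker to print the forwarded command line
DOCKER=echo osism_wrapper apply common
```

The dry-run prints `exec osism-ansible osism apply common`, showing how arguments pass through the shim unchanged.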
2026-02-17 04:59:56.529041 | orchestrator | 2026-02-17 04:59:56.529054 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-17 04:59:57.620202 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:57.620288 | orchestrator | 2026-02-17 04:59:57.620299 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-17 04:59:58.616752 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:58.616862 | orchestrator | 2026-02-17 04:59:58.616878 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-17 04:59:59.579319 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:59.579417 | orchestrator | 2026-02-17 04:59:59.579431 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-17 04:59:59.647354 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-17 04:59:59.647481 | orchestrator | 2026-02-17 04:59:59.647541 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-17 04:59:59.712658 | orchestrator | ok: [testbed-manager] 2026-02-17 04:59:59.712756 | orchestrator | 2026-02-17 04:59:59.712772 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-17 05:00:00.700207 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-17 05:00:00.700292 | orchestrator | 2026-02-17 05:00:00.700302 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-17 05:00:00.789548 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-17 05:00:00.789649 | orchestrator | 2026-02-17 05:00:00.789665 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-17 05:00:01.832747 | orchestrator | ok: [testbed-manager] 2026-02-17 05:00:01.832856 | orchestrator | 2026-02-17 05:00:01.832874 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-17 05:00:02.922007 | orchestrator | ok: [testbed-manager] 2026-02-17 05:00:02.922127 | orchestrator | 2026-02-17 05:00:02.922136 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-17 05:00:03.007661 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:00:03.007736 | orchestrator | 2026-02-17 05:00:03.007745 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-17 05:00:03.058783 | orchestrator | ok: [testbed-manager] 2026-02-17 05:00:03.058876 | orchestrator | 2026-02-17 05:00:03.058893 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-17 05:00:04.383962 | orchestrator | changed: [testbed-manager] 2026-02-17 05:00:04.384100 | orchestrator | 2026-02-17 05:00:04.384118 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-17 05:01:08.514210 | orchestrator | changed: [testbed-manager] 2026-02-17 05:01:08.514343 | orchestrator | 2026-02-17 05:01:08.514371 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-17 05:01:09.808796 | orchestrator | ok: [testbed-manager] 2026-02-17 05:01:09.808870 | orchestrator | 2026-02-17 05:01:09.808878 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-17 05:01:09.873932 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:01:09.874130 | orchestrator | 2026-02-17 05:01:09.874152 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-17 
05:01:10.689307 | orchestrator | ok: [testbed-manager] 2026-02-17 05:01:10.689436 | orchestrator | 2026-02-17 05:01:10.689463 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-17 05:01:10.779567 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:01:10.779661 | orchestrator | 2026-02-17 05:01:10.779676 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-17 05:01:10.779688 | orchestrator | 2026-02-17 05:01:10.779698 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-17 05:01:29.085790 | orchestrator | changed: [testbed-manager] 2026-02-17 05:01:29.085893 | orchestrator | 2026-02-17 05:01:29.085907 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-17 05:02:29.144170 | orchestrator | Pausing for 60 seconds 2026-02-17 05:02:29.144292 | orchestrator | changed: [testbed-manager] 2026-02-17 05:02:29.144308 | orchestrator | 2026-02-17 05:02:29.144322 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-17 05:02:29.206365 | orchestrator | ok: [testbed-manager] 2026-02-17 05:02:29.206463 | orchestrator | 2026-02-17 05:02:29.206477 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-17 05:02:32.749085 | orchestrator | changed: [testbed-manager] 2026-02-17 05:02:32.749255 | orchestrator | 2026-02-17 05:02:32.749276 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-17 05:03:35.373974 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-17 05:03:35.374191 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-02-17 05:03:35.374212 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-17 05:03:35.374226 | orchestrator | changed: [testbed-manager] 2026-02-17 05:03:35.374239 | orchestrator | 2026-02-17 05:03:35.374252 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-17 05:03:46.616800 | orchestrator | changed: [testbed-manager] 2026-02-17 05:03:46.616907 | orchestrator | 2026-02-17 05:03:46.616922 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-17 05:03:46.702374 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-17 05:03:46.702495 | orchestrator | 2026-02-17 05:03:46.702511 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-17 05:03:46.702524 | orchestrator | 2026-02-17 05:03:46.702536 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-17 05:03:46.773413 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:03:46.773504 | orchestrator | 2026-02-17 05:03:46.773517 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-17 05:03:46.843622 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-17 05:03:46.843717 | orchestrator | 2026-02-17 05:03:46.843751 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-17 05:03:47.938540 | orchestrator | changed: [testbed-manager] 2026-02-17 05:03:47.938650 | orchestrator | 2026-02-17 05:03:47.938665 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-17 05:03:51.423544 
| orchestrator | ok: [testbed-manager] 2026-02-17 05:03:51.423658 | orchestrator | 2026-02-17 05:03:51.423675 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-17 05:03:51.501018 | orchestrator | ok: [testbed-manager] => { 2026-02-17 05:03:51.501118 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-17 05:03:51.501134 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-17 05:03:51.501146 | orchestrator | "Checking running containers against expected versions...", 2026-02-17 05:03:51.501158 | orchestrator | "", 2026-02-17 05:03:51.501213 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-17 05:03:51.501225 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-17 05:03:51.501238 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501249 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-17 05:03:51.501260 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501271 | orchestrator | "", 2026-02-17 05:03:51.501282 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-17 05:03:51.501294 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-17 05:03:51.501305 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501317 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-17 05:03:51.501328 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501338 | orchestrator | "", 2026-02-17 05:03:51.501350 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-17 05:03:51.501361 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-17 05:03:51.501372 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501383 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-17 05:03:51.501394 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501404 | orchestrator | "", 2026-02-17 05:03:51.501415 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-17 05:03:51.501427 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-17 05:03:51.501437 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501448 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-17 05:03:51.501459 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501470 | orchestrator | "", 2026-02-17 05:03:51.501482 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-17 05:03:51.501493 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-17 05:03:51.501504 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501514 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-17 05:03:51.501525 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501536 | orchestrator | "", 2026-02-17 05:03:51.501550 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-17 05:03:51.501586 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.501600 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501614 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.501627 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501639 | orchestrator | "", 2026-02-17 05:03:51.501652 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-17 05:03:51.501665 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-17 05:03:51.501677 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501690 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-17 
05:03:51.501704 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501717 | orchestrator | "", 2026-02-17 05:03:51.501729 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-17 05:03:51.501742 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-17 05:03:51.501755 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501777 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-17 05:03:51.501789 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501802 | orchestrator | "", 2026-02-17 05:03:51.501815 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-17 05:03:51.501827 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-17 05:03:51.501840 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501853 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-17 05:03:51.501865 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501878 | orchestrator | "", 2026-02-17 05:03:51.501920 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-17 05:03:51.501932 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-17 05:03:51.501944 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.501955 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-17 05:03:51.501966 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.501977 | orchestrator | "", 2026-02-17 05:03:51.501988 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-17 05:03:51.501998 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502009 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.502074 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502085 | orchestrator | " Status: ✅ MATCH", 2026-02-17 
05:03:51.502097 | orchestrator | "", 2026-02-17 05:03:51.502108 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-17 05:03:51.502119 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502129 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.502140 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502151 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.502162 | orchestrator | "", 2026-02-17 05:03:51.502239 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-17 05:03:51.502251 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502262 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.502273 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502284 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.502295 | orchestrator | "", 2026-02-17 05:03:51.502306 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-17 05:03:51.502317 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502328 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.502338 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502371 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.502382 | orchestrator | "", 2026-02-17 05:03:51.502393 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-17 05:03:51.502404 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502426 | orchestrator | " Enabled: true", 2026-02-17 05:03:51.502437 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-17 05:03:51.502448 | orchestrator | " Status: ✅ MATCH", 2026-02-17 05:03:51.502459 | orchestrator | "", 2026-02-17 05:03:51.502469 | orchestrator | "=== Summary 
===", 2026-02-17 05:03:51.502479 | orchestrator | "Errors (version mismatches): 0", 2026-02-17 05:03:51.502488 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-17 05:03:51.502498 | orchestrator | "", 2026-02-17 05:03:51.502508 | orchestrator | "✅ All running containers match expected versions!" 2026-02-17 05:03:51.502518 | orchestrator | ] 2026-02-17 05:03:51.502528 | orchestrator | } 2026-02-17 05:03:51.502538 | orchestrator | 2026-02-17 05:03:51.502548 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-17 05:03:51.566365 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:03:51.566481 | orchestrator | 2026-02-17 05:03:51.566499 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:03:51.566513 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-17 05:03:51.566525 | orchestrator | 2026-02-17 05:04:04.081990 | orchestrator | 2026-02-17 05:04:04 | INFO  | Task 1d2e4d76-cd6f-4ef6-862f-72bd34569530 (sync inventory) is running in background. Output coming soon. 
2026-02-17 05:04:32.936381 | orchestrator | 2026-02-17 05:04:05 | INFO  | Starting group_vars file reorganization 2026-02-17 05:04:32.936524 | orchestrator | 2026-02-17 05:04:05 | INFO  | Moved 0 file(s) to their respective directories 2026-02-17 05:04:32.936549 | orchestrator | 2026-02-17 05:04:05 | INFO  | Group_vars file reorganization completed 2026-02-17 05:04:32.936594 | orchestrator | 2026-02-17 05:04:08 | INFO  | Starting variable preparation from inventory 2026-02-17 05:04:32.936613 | orchestrator | 2026-02-17 05:04:11 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-17 05:04:32.936632 | orchestrator | 2026-02-17 05:04:11 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-17 05:04:32.936650 | orchestrator | 2026-02-17 05:04:11 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-17 05:04:32.936668 | orchestrator | 2026-02-17 05:04:11 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-17 05:04:32.936685 | orchestrator | 2026-02-17 05:04:11 | INFO  | Variable preparation completed 2026-02-17 05:04:32.936704 | orchestrator | 2026-02-17 05:04:13 | INFO  | Starting inventory overwrite handling 2026-02-17 05:04:32.936722 | orchestrator | 2026-02-17 05:04:13 | INFO  | Handling group overwrites in 99-overwrite 2026-02-17 05:04:32.936740 | orchestrator | 2026-02-17 05:04:13 | INFO  | Removing group frr:children from 60-generic 2026-02-17 05:04:32.936759 | orchestrator | 2026-02-17 05:04:13 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-17 05:04:32.936777 | orchestrator | 2026-02-17 05:04:13 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-17 05:04:32.936796 | orchestrator | 2026-02-17 05:04:13 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-17 05:04:32.936813 | orchestrator | 2026-02-17 05:04:13 | INFO  | Handling group overwrites in 20-roles 2026-02-17 05:04:32.936830 | orchestrator | 2026-02-17 05:04:13 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-17 05:04:32.936847 | orchestrator | 2026-02-17 05:04:13 | INFO  | Removed 5 group(s) in total 2026-02-17 05:04:32.936865 | orchestrator | 2026-02-17 05:04:13 | INFO  | Inventory overwrite handling completed 2026-02-17 05:04:32.936884 | orchestrator | 2026-02-17 05:04:14 | INFO  | Starting merge of inventory files 2026-02-17 05:04:32.936901 | orchestrator | 2026-02-17 05:04:14 | INFO  | Inventory files merged successfully 2026-02-17 05:04:32.936951 | orchestrator | 2026-02-17 05:04:19 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-17 05:04:32.936970 | orchestrator | 2026-02-17 05:04:31 | INFO  | Successfully wrote ClusterShell configuration 2026-02-17 05:04:33.246877 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-17 05:04:33.246975 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-17 05:04:33.246991 | orchestrator | + local max_attempts=60 2026-02-17 05:04:33.247004 | orchestrator | + local name=kolla-ansible 2026-02-17 05:04:33.247016 | orchestrator | + local attempt_num=1 2026-02-17 05:04:33.247416 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-17 05:04:33.290549 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-17 05:04:33.290624 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-17 05:04:33.290633 | orchestrator | + local max_attempts=60 2026-02-17 05:04:33.290640 | orchestrator | + local name=osism-ansible 2026-02-17 05:04:33.290646 | orchestrator | + local attempt_num=1 2026-02-17 05:04:33.291538 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-17 05:04:33.328146 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-17 05:04:33.328279 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-17 05:04:33.519421 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-17 05:04:33.519519 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-17 05:04:33.519533 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-17 05:04:33.519545 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-17 05:04:33.519560 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-17 05:04:33.519571 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-17 05:04:33.519583 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-17 05:04:33.519593 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-17 05:04:33.519604 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 19 seconds ago 2026-02-17 05:04:33.519615 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-17 05:04:33.519626 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-17 05:04:33.519637 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-17 05:04:33.519647 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-17 05:04:33.519686 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-17 05:04:33.519698 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-17 05:04:33.519709 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-17 05:04:33.526297 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-17 05:04:33.526328 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-17 05:04:33.526340 | orchestrator | + osism apply facts 2026-02-17 05:04:45.835398 | orchestrator | 2026-02-17 05:04:45 | INFO  | Task f24c779f-dbf7-474b-9454-deb72d82faf7 (facts) was prepared for execution. 2026-02-17 05:04:45.835535 | orchestrator | 2026-02-17 05:04:45 | INFO  | It takes a moment until task f24c779f-dbf7-474b-9454-deb72d82faf7 (facts) has been started and output is visible here. 
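The `wait_for_container_healthy` helper traced in the xtrace output above can be reconstructed roughly as follows. The argument names (`max_attempts`, `name`, `attempt_num`) and the `docker inspect` health query are taken directly from the trace; the retry loop body and the sleep interval are assumptions:

```shell
# Rough reconstruction of wait_for_container_healthy from the xtrace above.
# Variable names and the docker inspect format string appear in the trace;
# the retry loop and 5s sleep are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    while true; do
        local status
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
        if [[ "$status" == "healthy" ]]; then
            return 0
        fi
        if (( attempt_num >= max_attempts )); then
            echo "Container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log both `wait_for_container_healthy 60 kolla-ansible` and `wait_for_container_healthy 60 osism-ansible` succeed on the first probe, so the retry path is never exercised there.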
2026-02-17 05:05:09.273174 | orchestrator | 2026-02-17 05:05:09.273339 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-17 05:05:09.273358 | orchestrator | 2026-02-17 05:05:09.273370 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-17 05:05:09.273381 | orchestrator | Tuesday 17 February 2026 05:04:52 +0000 (0:00:02.305) 0:00:02.306 ****** 2026-02-17 05:05:09.273392 | orchestrator | ok: [testbed-manager] 2026-02-17 05:05:09.273403 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:05:09.273413 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:05:09.273423 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:05:09.273433 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:05:09.273443 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:05:09.273452 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:05:09.273462 | orchestrator | 2026-02-17 05:05:09.273472 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-17 05:05:09.273488 | orchestrator | Tuesday 17 February 2026 05:04:56 +0000 (0:00:03.604) 0:00:05.910 ****** 2026-02-17 05:05:09.273504 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:05:09.273522 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:05:09.273537 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:05:09.273554 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:05:09.273569 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:05:09.273587 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:05:09.273604 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:05:09.273622 | orchestrator | 2026-02-17 05:05:09.273632 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-17 05:05:09.273642 | orchestrator | 2026-02-17 05:05:09.273652 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-17 05:05:09.273662 | orchestrator | Tuesday 17 February 2026 05:04:58 +0000 (0:00:02.623) 0:00:08.533 ****** 2026-02-17 05:05:09.273672 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:05:09.273704 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:05:09.273717 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:05:09.273729 | orchestrator | ok: [testbed-manager] 2026-02-17 05:05:09.273747 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:05:09.273758 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:05:09.273771 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:05:09.273782 | orchestrator | 2026-02-17 05:05:09.273794 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-17 05:05:09.273806 | orchestrator | 2026-02-17 05:05:09.273818 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-17 05:05:09.273830 | orchestrator | Tuesday 17 February 2026 05:05:06 +0000 (0:00:07.311) 0:00:15.845 ****** 2026-02-17 05:05:09.273841 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:05:09.273882 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:05:09.273900 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:05:09.273917 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:05:09.273933 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:05:09.273950 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:05:09.273967 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:05:09.273983 | orchestrator | 2026-02-17 05:05:09.273993 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:05:09.274004 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 05:05:09.274068 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-17 05:05:09.274080 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 05:05:09.274090 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 05:05:09.274100 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 05:05:09.274110 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 05:05:09.274120 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 05:05:09.274129 | orchestrator | 2026-02-17 05:05:09.274139 | orchestrator | 2026-02-17 05:05:09.274149 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:05:09.274159 | orchestrator | Tuesday 17 February 2026 05:05:08 +0000 (0:00:02.744) 0:00:18.589 ****** 2026-02-17 05:05:09.274169 | orchestrator | =============================================================================== 2026-02-17 05:05:09.274179 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.31s 2026-02-17 05:05:09.274190 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.60s 2026-02-17 05:05:09.274199 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.74s 2026-02-17 05:05:09.274209 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.62s 2026-02-17 05:05:09.607978 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-17 05:05:09.704954 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-17 05:05:09.705440 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-17 05:05:09.746591 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-17 05:05:09.746692 | 
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-17 05:05:09.751485 | orchestrator | + set -e 2026-02-17 05:05:09.751558 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-17 05:05:09.751573 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-17 05:05:09.761824 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-17 05:05:09.769987 | orchestrator | 2026-02-17 05:05:09.770110 | orchestrator | # UPGRADE SERVICES 2026-02-17 05:05:09.770126 | orchestrator | 2026-02-17 05:05:09.770138 | orchestrator | + set -e 2026-02-17 05:05:09.770149 | orchestrator | + echo 2026-02-17 05:05:09.770161 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-17 05:05:09.770172 | orchestrator | + echo 2026-02-17 05:05:09.770184 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 05:05:09.771360 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 05:05:09.771388 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 05:05:09.771399 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 05:05:09.771410 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 05:05:09.771421 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 05:05:09.771434 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 05:05:09.771445 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 05:05:09.771482 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 05:05:09.771494 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 05:05:09.771504 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 05:05:09.771515 | orchestrator | ++ export ARA=false 2026-02-17 05:05:09.771527 | orchestrator | ++ ARA=false 2026-02-17 05:05:09.771538 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 05:05:09.771548 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-17 05:05:09.771559 | orchestrator | ++ export TEMPEST=false 
2026-02-17 05:05:09.771570 | orchestrator | ++ TEMPEST=false
2026-02-17 05:05:09.771581 | orchestrator | ++ export IS_ZUUL=true
2026-02-17 05:05:09.771591 | orchestrator | ++ IS_ZUUL=true
2026-02-17 05:05:09.771602 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198
2026-02-17 05:05:09.771614 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198
2026-02-17 05:05:09.771625 | orchestrator | ++ export EXTERNAL_API=false
2026-02-17 05:05:09.771636 | orchestrator | ++ EXTERNAL_API=false
2026-02-17 05:05:09.771647 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-17 05:05:09.771657 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-17 05:05:09.771668 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-17 05:05:09.771679 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-17 05:05:09.771690 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-17 05:05:09.771701 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-17 05:05:09.771711 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-17 05:05:09.771722 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-17 05:05:09.771733 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-02-17 05:05:09.771743 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-02-17 05:05:09.771755 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-17 05:05:09.780586 | orchestrator | + set -e
2026-02-17 05:05:09.780676 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-17 05:05:09.781611 | orchestrator | ++ export INTERACTIVE=false
2026-02-17 05:05:09.781695 | orchestrator | ++ INTERACTIVE=false
2026-02-17 05:05:09.781709 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-17 05:05:09.781720 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-17 05:05:09.781730 | orchestrator | + source /opt/manager-vars.sh
2026-02-17 05:05:09.781740 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-17 05:05:09.781750 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-17 05:05:09.781759 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-17 05:05:09.781769 | orchestrator | ++ CEPH_VERSION=reef
2026-02-17 05:05:09.781780 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-17 05:05:09.781790 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-17 05:05:09.781819 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-17 05:05:09.781830 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-17 05:05:09.781840 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-17 05:05:09.781850 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-17 05:05:09.781860 | orchestrator | ++ export ARA=false
2026-02-17 05:05:09.781870 | orchestrator | ++ ARA=false
2026-02-17 05:05:09.781880 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-17 05:05:09.781890 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-17 05:05:09.781900 | orchestrator | ++ export TEMPEST=false
2026-02-17 05:05:09.781910 | orchestrator | ++ TEMPEST=false
2026-02-17 05:05:09.781919 | orchestrator | ++ export IS_ZUUL=true
2026-02-17 05:05:09.781929 | orchestrator | ++ IS_ZUUL=true
2026-02-17 05:05:09.782100 | orchestrator |
2026-02-17 05:05:09.782209 | orchestrator | # PULL IMAGES
2026-02-17 05:05:09.782220 | orchestrator |
2026-02-17 05:05:09.782254 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198
2026-02-17 05:05:09.782265 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198
2026-02-17 05:05:09.782275 | orchestrator | ++ export EXTERNAL_API=false
2026-02-17 05:05:09.782286 | orchestrator | ++ EXTERNAL_API=false
2026-02-17 05:05:09.782296 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-17 05:05:09.782306 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-17 05:05:09.782316 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-17 05:05:09.782325 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-17 05:05:09.782335 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-17 05:05:09.782345 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-17 05:05:09.782355 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-17 05:05:09.782364 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-17 05:05:09.782374 | orchestrator | + echo
2026-02-17 05:05:09.782384 | orchestrator | + echo '# PULL IMAGES'
2026-02-17 05:05:09.782394 | orchestrator | + echo
2026-02-17 05:05:09.783345 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-17 05:05:09.849309 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-17 05:05:09.849398 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-17 05:05:11.975448 | orchestrator | 2026-02-17 05:05:11 | INFO  | Trying to run play pull-images in environment custom
2026-02-17 05:05:22.155336 | orchestrator | 2026-02-17 05:05:22 | INFO  | Task 0a1908c0-3770-46ab-bea3-96991bdb25b2 (pull-images) was prepared for execution.
2026-02-17 05:05:22.155445 | orchestrator | 2026-02-17 05:05:22 | INFO  | Task 0a1908c0-3770-46ab-bea3-96991bdb25b2 is running in background. No more output. Check ARA for logs.
2026-02-17 05:05:22.491778 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-17 05:05:22.505030 | orchestrator | + set -e
2026-02-17 05:05:22.505084 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-17 05:05:22.505102 | orchestrator | ++ export INTERACTIVE=false
2026-02-17 05:05:22.505114 | orchestrator | ++ INTERACTIVE=false
2026-02-17 05:05:22.505121 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-17 05:05:22.505129 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-17 05:05:22.505137 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-17 05:05:22.506772 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-17 05:05:22.519620 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-17 05:05:22.519700 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-17 05:05:22.520406 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-17 05:05:22.575982 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-17 05:05:22.576072 | orchestrator | + osism apply frr
2026-02-17 05:05:34.843183 | orchestrator | 2026-02-17 05:05:34 | INFO  | Task 9362cd06-2b11-4d1b-8b72-659d8d975199 (frr) was prepared for execution.
2026-02-17 05:05:34.843353 | orchestrator | 2026-02-17 05:05:34 | INFO  | It takes a moment until task 9362cd06-2b11-4d1b-8b72-659d8d975199 (frr) has been started and output is visible here.
2026-02-17 05:06:07.411604 | orchestrator |
2026-02-17 05:06:07.411720 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-17 05:06:07.411737 | orchestrator |
2026-02-17 05:06:07.411750 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-17 05:06:07.411761 | orchestrator | Tuesday 17 February 2026 05:05:43 +0000 (0:00:03.461) 0:00:03.461 ******
2026-02-17 05:06:07.411773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-17 05:06:07.411787 | orchestrator |
2026-02-17 05:06:07.411798 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-17 05:06:07.411816 | orchestrator | Tuesday 17 February 2026 05:05:44 +0000 (0:00:01.824) 0:00:05.285 ******
2026-02-17 05:06:07.411835 | orchestrator | ok: [testbed-manager]
2026-02-17 05:06:07.411856 | orchestrator |
2026-02-17 05:06:07.411874 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-17 05:06:07.411895 | orchestrator | Tuesday 17 February 2026 05:05:47 +0000 (0:00:02.408) 0:00:07.694 ******
2026-02-17 05:06:07.411914 | orchestrator | ok: [testbed-manager]
2026-02-17 05:06:07.411933 | orchestrator |
2026-02-17 05:06:07.411953 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-17 05:06:07.411967 | orchestrator | Tuesday 17 February 2026 05:05:50 +0000 (0:00:03.181) 0:00:10.876 ******
2026-02-17 05:06:07.411978 | orchestrator | ok: [testbed-manager]
2026-02-17 05:06:07.411990 | orchestrator |
2026-02-17 05:06:07.412002 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-17 05:06:07.412013 | orchestrator | Tuesday 17 February 2026 05:05:52 +0000 (0:00:01.950) 0:00:12.826 ******
2026-02-17 05:06:07.412024 | orchestrator | ok: [testbed-manager]
2026-02-17 05:06:07.412035 | orchestrator |
2026-02-17 05:06:07.412046 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-17 05:06:07.412057 | orchestrator | Tuesday 17 February 2026 05:05:54 +0000 (0:00:01.955) 0:00:14.782 ******
2026-02-17 05:06:07.412068 | orchestrator | ok: [testbed-manager]
2026-02-17 05:06:07.412079 | orchestrator |
2026-02-17 05:06:07.412090 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-17 05:06:07.412102 | orchestrator | Tuesday 17 February 2026 05:05:56 +0000 (0:00:02.347) 0:00:17.129 ******
2026-02-17 05:06:07.412113 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:06:07.412156 | orchestrator |
2026-02-17 05:06:07.412171 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-17 05:06:07.412185 | orchestrator | Tuesday 17 February 2026 05:05:57 +0000 (0:00:01.094) 0:00:18.223 ******
2026-02-17 05:06:07.412198 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:06:07.412209 | orchestrator |
2026-02-17 05:06:07.412221 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-17 05:06:07.412232 | orchestrator | Tuesday 17 February 2026 05:05:59 +0000 (0:00:01.156) 0:00:19.380 ******
2026-02-17 05:06:07.412243 | orchestrator | ok: [testbed-manager]
2026-02-17 05:06:07.412254 | orchestrator |
2026-02-17 05:06:07.412305 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-17 05:06:07.412320 | orchestrator | Tuesday 17 February 2026 05:06:01 +0000 (0:00:02.012) 0:00:21.393 ******
2026-02-17 05:06:07.412331 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-17 05:06:07.412342 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-17 05:06:07.412354 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-17 05:06:07.412366 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-17 05:06:07.412377 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-17 05:06:07.412388 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-17 05:06:07.412399 | orchestrator |
2026-02-17 05:06:07.412427 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-17 05:06:07.412439 | orchestrator | Tuesday 17 February 2026 05:06:04 +0000 (0:00:03.488) 0:00:24.882 ******
2026-02-17 05:06:07.412451 | orchestrator | ok: [testbed-manager]
2026-02-17 05:06:07.412462 | orchestrator |
2026-02-17 05:06:07.412473 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 05:06:07.412485 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-17 05:06:07.412496 | orchestrator |
2026-02-17 05:06:07.412507 | orchestrator |
2026-02-17 05:06:07.412518 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 05:06:07.412529 | orchestrator | Tuesday 17 February 2026 05:06:07 +0000 (0:00:02.521) 0:00:27.403 ******
2026-02-17 05:06:07.412540 | orchestrator | ===============================================================================
2026-02-17 05:06:07.412551 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.49s
2026-02-17 05:06:07.412562 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.18s
2026-02-17 05:06:07.412573 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.52s
2026-02-17 05:06:07.412584 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.41s
2026-02-17 05:06:07.412595 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.35s
2026-02-17 05:06:07.412606 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.01s
2026-02-17 05:06:07.412617 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.96s
2026-02-17 05:06:07.412628 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.95s
2026-02-17 05:06:07.412657 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.82s
2026-02-17 05:06:07.412669 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.16s
2026-02-17 05:06:07.412680 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.09s
2026-02-17 05:06:07.730207 | orchestrator | + osism apply kubernetes
2026-02-17 05:06:09.840049 | orchestrator | 2026-02-17 05:06:09 | INFO  | Task 052495a9-0311-4406-a7b8-48283a2d1399 (kubernetes) was prepared for execution.
2026-02-17 05:06:09.840177 | orchestrator | 2026-02-17 05:06:09 | INFO  | It takes a moment until task 052495a9-0311-4406-a7b8-48283a2d1399 (kubernetes) has been started and output is visible here.
2026-02-17 05:06:51.749644 | orchestrator |
2026-02-17 05:06:51.749781 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-17 05:06:51.749800 | orchestrator |
2026-02-17 05:06:51.749811 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-17 05:06:51.749823 | orchestrator | Tuesday 17 February 2026 05:06:16 +0000 (0:00:01.947) 0:00:01.947 ******
2026-02-17 05:06:51.749833 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:06:51.749844 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:06:51.749854 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:06:51.749864 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:06:51.749873 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:06:51.749883 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:06:51.749893 | orchestrator |
2026-02-17 05:06:51.749903 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-17 05:06:51.749913 | orchestrator | Tuesday 17 February 2026 05:06:20 +0000 (0:00:03.930) 0:00:05.878 ******
2026-02-17 05:06:51.749923 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.749934 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.749944 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.749953 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.749963 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.749972 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.749982 | orchestrator |
2026-02-17 05:06:51.749992 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-17 05:06:51.750002 | orchestrator | Tuesday 17 February 2026 05:06:22 +0000 (0:00:01.714) 0:00:07.593 ******
2026-02-17 05:06:51.750012 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.750080 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.750090 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.750100 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.750110 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.750120 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.750130 | orchestrator |
2026-02-17 05:06:51.750140 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-17 05:06:51.750150 | orchestrator | Tuesday 17 February 2026 05:06:23 +0000 (0:00:01.858) 0:00:09.451 ******
2026-02-17 05:06:51.750160 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:06:51.750172 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:06:51.750185 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:06:51.750196 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:06:51.750207 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:06:51.750218 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:06:51.750230 | orchestrator |
2026-02-17 05:06:51.750242 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-17 05:06:51.750253 | orchestrator | Tuesday 17 February 2026 05:06:26 +0000 (0:00:02.556) 0:00:12.007 ******
2026-02-17 05:06:51.750264 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:06:51.750276 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:06:51.750287 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:06:51.750346 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:06:51.750358 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:06:51.750370 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:06:51.750381 | orchestrator |
2026-02-17 05:06:51.750392 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-17 05:06:51.750404 | orchestrator | Tuesday 17 February 2026 05:06:28 +0000 (0:00:02.298) 0:00:14.306 ******
2026-02-17 05:06:51.750416 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:06:51.750427 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:06:51.750438 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:06:51.750449 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:06:51.750460 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:06:51.750494 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:06:51.750506 | orchestrator |
2026-02-17 05:06:51.750518 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-17 05:06:51.750529 | orchestrator | Tuesday 17 February 2026 05:06:30 +0000 (0:00:02.201) 0:00:16.508 ******
2026-02-17 05:06:51.750540 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.750552 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.750563 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.750573 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.750583 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.750593 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.750602 | orchestrator |
2026-02-17 05:06:51.750612 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-17 05:06:51.750622 | orchestrator | Tuesday 17 February 2026 05:06:33 +0000 (0:00:02.064) 0:00:18.573 ******
2026-02-17 05:06:51.750632 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.750642 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.750651 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.750661 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.750671 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.750682 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.750700 | orchestrator |
2026-02-17 05:06:51.750717 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-17 05:06:51.750735 | orchestrator | Tuesday 17 February 2026 05:06:34 +0000 (0:00:01.753) 0:00:20.326 ******
2026-02-17 05:06:51.750752 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-17 05:06:51.750769 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-17 05:06:51.750784 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.750794 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-17 05:06:51.750814 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-17 05:06:51.750824 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.750834 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-17 05:06:51.750844 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-17 05:06:51.750853 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.750863 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-17 05:06:51.750873 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-17 05:06:51.750882 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.750910 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-17 05:06:51.750921 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-17 05:06:51.750931 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.750940 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-17 05:06:51.750950 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-17 05:06:51.750960 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.750969 | orchestrator |
2026-02-17 05:06:51.750979 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-17 05:06:51.750989 | orchestrator | Tuesday 17 February 2026 05:06:36 +0000 (0:00:02.030) 0:00:22.386 ******
2026-02-17 05:06:51.750998 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.751008 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.751018 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.751027 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.751037 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.751047 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.751056 | orchestrator |
2026-02-17 05:06:51.751074 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-17 05:06:51.751085 | orchestrator | Tuesday 17 February 2026 05:06:38 +0000 (0:00:01.976) 0:00:24.417 ******
2026-02-17 05:06:51.751095 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:06:51.751105 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:06:51.751114 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:06:51.751124 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:06:51.751133 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:06:51.751143 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:06:51.751153 | orchestrator |
2026-02-17 05:06:51.751163 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-17 05:06:51.751172 | orchestrator | Tuesday 17 February 2026 05:06:40 +0000 (0:00:01.976) 0:00:26.393 ******
2026-02-17 05:06:51.751182 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:06:51.751192 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:06:51.751201 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:06:51.751211 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:06:51.751225 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:06:51.751235 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:06:51.751244 | orchestrator |
2026-02-17 05:06:51.751254 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-17 05:06:51.751264 | orchestrator | Tuesday 17 February 2026 05:06:43 +0000 (0:00:02.655) 0:00:29.049 ******
2026-02-17 05:06:51.751274 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.751283 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.751293 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.751326 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.751336 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.751346 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.751355 | orchestrator |
2026-02-17 05:06:51.751365 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-17 05:06:51.751375 | orchestrator | Tuesday 17 February 2026 05:06:45 +0000 (0:00:02.001) 0:00:31.051 ******
2026-02-17 05:06:51.751385 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.751395 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.751404 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.751414 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.751424 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.751433 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.751443 | orchestrator |
2026-02-17 05:06:51.751453 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-17 05:06:51.751464 | orchestrator | Tuesday 17 February 2026 05:06:47 +0000 (0:00:02.080) 0:00:33.132 ******
2026-02-17 05:06:51.751474 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.751484 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.751494 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.751504 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.751514 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.751524 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.751533 | orchestrator |
2026-02-17 05:06:51.751547 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-17 05:06:51.751557 | orchestrator | Tuesday 17 February 2026 05:06:49 +0000 (0:00:01.751) 0:00:34.883 ******
2026-02-17 05:06:51.751567 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-17 05:06:51.751577 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-17 05:06:51.751587 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.751597 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-17 05:06:51.751606 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-17 05:06:51.751616 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.751626 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-17 05:06:51.751635 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-17 05:06:51.751651 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:06:51.751661 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-17 05:06:51.751671 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-17 05:06:51.751680 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:06:51.751690 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-17 05:06:51.751700 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-17 05:06:51.751709 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:06:51.751724 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-17 05:06:51.751740 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-17 05:06:51.751756 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:06:51.751773 | orchestrator |
2026-02-17 05:06:51.751791 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-17 05:06:51.751808 | orchestrator | Tuesday 17 February 2026 05:06:51 +0000 (0:00:01.923) 0:00:36.807 ******
2026-02-17 05:06:51.751824 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:06:51.751837 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:06:51.751854 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:08:42.914183 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:08:42.914318 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:08:42.914342 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:08:42.914360 | orchestrator |
2026-02-17 05:08:42.914452 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-17 05:08:42.914472 | orchestrator | Tuesday 17 February 2026 05:06:53 +0000 (0:00:01.811) 0:00:38.619 ******
2026-02-17 05:08:42.914489 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:08:42.914505 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:08:42.914521 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:08:42.914538 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:08:42.914554 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:08:42.914571 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:08:42.914588 | orchestrator |
2026-02-17 05:08:42.914604 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-17 05:08:42.914621 | orchestrator |
2026-02-17 05:08:42.914637 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-17 05:08:42.914655 | orchestrator | Tuesday 17 February 2026 05:06:55 +0000 (0:00:02.735) 0:00:41.355 ******
2026-02-17 05:08:42.914672 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.914691 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:08:42.914708 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:08:42.914724 | orchestrator |
2026-02-17 05:08:42.914741 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-17 05:08:42.914759 | orchestrator | Tuesday 17 February 2026 05:06:57 +0000 (0:00:01.701) 0:00:43.056 ******
2026-02-17 05:08:42.914777 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:08:42.914793 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.914809 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:08:42.914825 | orchestrator |
2026-02-17 05:08:42.914843 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-17 05:08:42.914860 | orchestrator | Tuesday 17 February 2026 05:06:59 +0000 (0:00:02.064) 0:00:45.120 ******
2026-02-17 05:08:42.914877 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:08:42.914894 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:08:42.914911 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:08:42.914928 | orchestrator |
2026-02-17 05:08:42.914965 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-17 05:08:42.914985 | orchestrator | Tuesday 17 February 2026 05:07:01 +0000 (0:00:02.116) 0:00:47.237 ******
2026-02-17 05:08:42.915002 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.915019 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:08:42.915035 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:08:42.915052 | orchestrator |
2026-02-17 05:08:42.915092 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-17 05:08:42.915108 | orchestrator | Tuesday 17 February 2026 05:07:03 +0000 (0:00:02.039) 0:00:49.277 ******
2026-02-17 05:08:42.915125 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:08:42.915141 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:08:42.915158 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:08:42.915174 | orchestrator |
2026-02-17 05:08:42.915191 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-17 05:08:42.915207 | orchestrator | Tuesday 17 February 2026 05:07:05 +0000 (0:00:01.388) 0:00:50.666 ******
2026-02-17 05:08:42.915222 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.915238 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:08:42.915255 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:08:42.915271 | orchestrator |
2026-02-17 05:08:42.915288 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-17 05:08:42.915304 | orchestrator | Tuesday 17 February 2026 05:07:06 +0000 (0:00:01.695) 0:00:52.361 ******
2026-02-17 05:08:42.915320 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.915336 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:08:42.915352 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:08:42.915387 | orchestrator |
2026-02-17 05:08:42.915404 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-17 05:08:42.915420 | orchestrator | Tuesday 17 February 2026 05:07:09 +0000 (0:00:02.280) 0:00:54.642 ******
2026-02-17 05:08:42.915437 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:08:42.915453 | orchestrator |
2026-02-17 05:08:42.915469 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-17 05:08:42.915486 | orchestrator | Tuesday 17 February 2026 05:07:11 +0000 (0:00:01.919) 0:00:56.561 ******
2026-02-17 05:08:42.915502 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.915518 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:08:42.915534 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:08:42.915550 | orchestrator |
2026-02-17 05:08:42.915566 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-17 05:08:42.915583 | orchestrator | Tuesday 17 February 2026 05:07:13 +0000 (0:00:02.413) 0:00:58.974 ******
2026-02-17 05:08:42.915600 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:08:42.915615 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.915631 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:08:42.915648 | orchestrator |
2026-02-17 05:08:42.915664 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-17 05:08:42.915681 | orchestrator | Tuesday 17 February 2026 05:07:15 +0000 (0:00:01.667) 0:01:00.642 ******
2026-02-17 05:08:42.915696 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:08:42.915712 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:08:42.915728 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:08:42.915744 | orchestrator |
2026-02-17 05:08:42.915760 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-17 05:08:42.915777 | orchestrator | Tuesday 17 February 2026 05:07:16 +0000 (0:00:01.802) 0:01:02.444 ******
2026-02-17 05:08:42.915794 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:08:42.915809 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:08:42.915825 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:08:42.915841 | orchestrator |
2026-02-17 05:08:42.915857 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-17 05:08:42.915874 | orchestrator | Tuesday 17 February 2026 05:07:19 +0000 (0:00:02.443) 0:01:04.887 ******
2026-02-17 05:08:42.915891 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:08:42.915907 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:08:42.915944 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:08:42.915960 | orchestrator |
2026-02-17 05:08:42.915977 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-17 05:08:42.915994 | orchestrator | Tuesday 17 February 2026 05:07:20 +0000 (0:00:01.365) 0:01:06.253 ******
2026-02-17 05:08:42.916021 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:08:42.916038 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:08:42.916054 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:08:42.916070 | orchestrator |
2026-02-17 05:08:42.916086 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-17 05:08:42.916103 | orchestrator | Tuesday 17 February 2026 05:07:22 +0000 (0:00:01.590) 0:01:07.844 ******
2026-02-17 05:08:42.916119 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:08:42.916136 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:08:42.916151 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:08:42.916168 | orchestrator |
2026-02-17 05:08:42.916185 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-17 05:08:42.916201 | orchestrator | Tuesday 17 February 2026 05:07:24 +0000 (0:00:02.206) 0:01:10.051 ******
2026-02-17 05:08:42.916217 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.916232 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:08:42.916247 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:08:42.916263 | orchestrator |
2026-02-17 05:08:42.916279 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-17 05:08:42.916295 | orchestrator | Tuesday 17 February 2026 05:07:26 +0000 (0:00:01.929) 0:01:11.981 ******
2026-02-17 05:08:42.916311 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:08:42.916328 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:08:42.916344 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:08:42.916359 | orchestrator |
2026-02-17 05:08:42.916401
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-17 05:08:42.916419 | orchestrator | Tuesday 17 February 2026 05:07:27 +0000 (0:00:01.460) 0:01:13.442 ****** 2026-02-17 05:08:42.916435 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-17 05:08:42.916453 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-17 05:08:42.916469 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-17 05:08:42.916486 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-17 05:08:42.916502 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-17 05:08:42.916515 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-02-17 05:08:42.916525 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:08:42.916534 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:08:42.916544 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:08:42.916554 | orchestrator | 2026-02-17 05:08:42.916564 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-17 05:08:42.916574 | orchestrator | Tuesday 17 February 2026 05:07:51 +0000 (0:00:23.506) 0:01:36.948 ****** 2026-02-17 05:08:42.916584 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:08:42.916594 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:08:42.916603 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:08:42.916613 | orchestrator | 2026-02-17 05:08:42.916622 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-17 05:08:42.916632 | orchestrator | Tuesday 17 February 2026 05:07:52 +0000 (0:00:01.415) 0:01:38.364 ****** 2026-02-17 05:08:42.916642 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:08:42.916652 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:08:42.916661 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:08:42.916671 | orchestrator | 2026-02-17 05:08:42.916681 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-17 05:08:42.916700 | orchestrator | Tuesday 17 February 2026 05:07:54 +0000 (0:00:02.123) 0:01:40.487 ****** 2026-02-17 05:08:42.916710 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:08:42.916720 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:08:42.916730 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:08:42.916739 | orchestrator | 2026-02-17 05:08:42.916749 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-17 05:08:42.916759 | orchestrator | Tuesday 17 February 2026 05:07:57 +0000 (0:00:02.267) 0:01:42.754 ****** 2026-02-17 05:08:42.916769 | orchestrator 
| changed: [testbed-node-0] 2026-02-17 05:08:42.916778 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:08:42.916788 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:08:42.916798 | orchestrator | 2026-02-17 05:08:42.916807 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-17 05:08:42.916817 | orchestrator | Tuesday 17 February 2026 05:08:37 +0000 (0:00:40.181) 0:02:22.936 ****** 2026-02-17 05:08:42.916827 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:08:42.916837 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:08:42.916846 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:08:42.916856 | orchestrator | 2026-02-17 05:08:42.916865 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-17 05:08:42.916875 | orchestrator | Tuesday 17 February 2026 05:08:39 +0000 (0:00:01.797) 0:02:24.733 ****** 2026-02-17 05:08:42.916885 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:08:42.916894 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:08:42.916904 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:08:42.916913 | orchestrator | 2026-02-17 05:08:42.916923 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-17 05:08:42.916933 | orchestrator | Tuesday 17 February 2026 05:08:40 +0000 (0:00:01.730) 0:02:26.463 ****** 2026-02-17 05:08:42.916942 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:08:42.916952 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:08:42.916962 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:08:42.916971 | orchestrator | 2026-02-17 05:08:42.916991 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-17 05:09:30.986969 | orchestrator | Tuesday 17 February 2026 05:08:42 +0000 (0:00:01.940) 0:02:28.404 ****** 2026-02-17 05:09:30.987072 | orchestrator | ok: [testbed-node-0] 2026-02-17 
05:09:30.987090 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:09:30.987102 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:09:30.987113 | orchestrator | 2026-02-17 05:09:30.987126 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-17 05:09:30.987137 | orchestrator | Tuesday 17 February 2026 05:08:44 +0000 (0:00:01.658) 0:02:30.062 ****** 2026-02-17 05:09:30.987149 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:09:30.987160 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:09:30.987171 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:09:30.987181 | orchestrator | 2026-02-17 05:09:30.987193 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-17 05:09:30.987204 | orchestrator | Tuesday 17 February 2026 05:08:45 +0000 (0:00:01.380) 0:02:31.443 ****** 2026-02-17 05:09:30.987215 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:09:30.987228 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:09:30.987239 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:09:30.987250 | orchestrator | 2026-02-17 05:09:30.987261 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-17 05:09:30.987272 | orchestrator | Tuesday 17 February 2026 05:08:47 +0000 (0:00:01.713) 0:02:33.156 ****** 2026-02-17 05:09:30.987284 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:09:30.987295 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:09:30.987306 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:09:30.987316 | orchestrator | 2026-02-17 05:09:30.987327 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-17 05:09:30.987338 | orchestrator | Tuesday 17 February 2026 05:08:49 +0000 (0:00:02.010) 0:02:35.167 ****** 2026-02-17 05:09:30.987349 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:09:30.987384 | orchestrator | changed: 
[testbed-node-1] 2026-02-17 05:09:30.987449 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:09:30.987465 | orchestrator | 2026-02-17 05:09:30.987484 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-17 05:09:30.987513 | orchestrator | Tuesday 17 February 2026 05:08:51 +0000 (0:00:01.805) 0:02:36.972 ****** 2026-02-17 05:09:30.987533 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:09:30.987550 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:09:30.987567 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:09:30.987584 | orchestrator | 2026-02-17 05:09:30.987602 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-17 05:09:30.987618 | orchestrator | Tuesday 17 February 2026 05:08:53 +0000 (0:00:01.967) 0:02:38.940 ****** 2026-02-17 05:09:30.987634 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:09:30.987650 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:09:30.987668 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:09:30.987686 | orchestrator | 2026-02-17 05:09:30.987703 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-17 05:09:30.987720 | orchestrator | Tuesday 17 February 2026 05:08:54 +0000 (0:00:01.356) 0:02:40.296 ****** 2026-02-17 05:09:30.987739 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:09:30.987759 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:09:30.987777 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:09:30.987827 | orchestrator | 2026-02-17 05:09:30.987879 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-17 05:09:30.987892 | orchestrator | Tuesday 17 February 2026 05:08:56 +0000 (0:00:01.435) 0:02:41.732 ****** 2026-02-17 05:09:30.987903 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:09:30.987914 | orchestrator | ok: [testbed-node-0] 
2026-02-17 05:09:30.987925 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:09:30.987936 | orchestrator | 2026-02-17 05:09:30.987947 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-17 05:09:30.987958 | orchestrator | Tuesday 17 February 2026 05:08:57 +0000 (0:00:01.662) 0:02:43.394 ****** 2026-02-17 05:09:30.987969 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:09:30.987980 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:09:30.987990 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:09:30.988001 | orchestrator | 2026-02-17 05:09:30.988013 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-17 05:09:30.988026 | orchestrator | Tuesday 17 February 2026 05:08:59 +0000 (0:00:01.733) 0:02:45.128 ****** 2026-02-17 05:09:30.988037 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-17 05:09:30.988048 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-17 05:09:30.988059 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-17 05:09:30.988070 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-17 05:09:30.988081 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-17 05:09:30.988092 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-17 05:09:30.988104 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-17 05:09:30.988115 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-17 05:09:30.988126 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-17 05:09:30.988137 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-17 05:09:30.988148 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-17 05:09:30.988173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-17 05:09:30.988204 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-17 05:09:30.988215 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-17 05:09:30.988226 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-17 05:09:30.988237 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-17 05:09:30.988248 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-17 05:09:30.988259 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-17 05:09:30.988269 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-17 05:09:30.988280 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-17 05:09:30.988291 | orchestrator | 2026-02-17 05:09:30.988302 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-17 05:09:30.988313 | orchestrator | 2026-02-17 05:09:30.988324 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-17 05:09:30.988335 | orchestrator | Tuesday 17 February 2026 05:09:03 +0000 (0:00:04.376) 0:02:49.504 ****** 
2026-02-17 05:09:30.988346 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:09:30.988356 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:09:30.988367 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:09:30.988378 | orchestrator | 2026-02-17 05:09:30.988429 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-17 05:09:30.988443 | orchestrator | Tuesday 17 February 2026 05:09:05 +0000 (0:00:01.383) 0:02:50.888 ****** 2026-02-17 05:09:30.988454 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:09:30.988464 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:09:30.988475 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:09:30.988486 | orchestrator | 2026-02-17 05:09:30.988497 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-17 05:09:30.988508 | orchestrator | Tuesday 17 February 2026 05:09:07 +0000 (0:00:01.649) 0:02:52.538 ****** 2026-02-17 05:09:30.988519 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:09:30.988529 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:09:30.988540 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:09:30.988551 | orchestrator | 2026-02-17 05:09:30.988563 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-17 05:09:30.988581 | orchestrator | Tuesday 17 February 2026 05:09:08 +0000 (0:00:01.579) 0:02:54.117 ****** 2026-02-17 05:09:30.988593 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:09:30.988604 | orchestrator | 2026-02-17 05:09:30.988615 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-17 05:09:30.988626 | orchestrator | Tuesday 17 February 2026 05:09:10 +0000 (0:00:01.609) 0:02:55.727 ****** 2026-02-17 05:09:30.988636 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:09:30.988647 | orchestrator | 
skipping: [testbed-node-4] 2026-02-17 05:09:30.988658 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:09:30.988669 | orchestrator | 2026-02-17 05:09:30.988680 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-17 05:09:30.988691 | orchestrator | Tuesday 17 February 2026 05:09:11 +0000 (0:00:01.351) 0:02:57.078 ****** 2026-02-17 05:09:30.988702 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:09:30.988713 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:09:30.988724 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:09:30.988734 | orchestrator | 2026-02-17 05:09:30.988745 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-17 05:09:30.988756 | orchestrator | Tuesday 17 February 2026 05:09:13 +0000 (0:00:01.595) 0:02:58.674 ****** 2026-02-17 05:09:30.988776 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:09:30.988786 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:09:30.988797 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:09:30.988808 | orchestrator | 2026-02-17 05:09:30.988819 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-17 05:09:30.988830 | orchestrator | Tuesday 17 February 2026 05:09:14 +0000 (0:00:01.374) 0:03:00.048 ****** 2026-02-17 05:09:30.988841 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:09:30.988852 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:09:30.988863 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:09:30.988874 | orchestrator | 2026-02-17 05:09:30.988884 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-17 05:09:30.988895 | orchestrator | Tuesday 17 February 2026 05:09:16 +0000 (0:00:01.708) 0:03:01.757 ****** 2026-02-17 05:09:30.988906 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:09:30.988917 | orchestrator | ok: [testbed-node-4] 
2026-02-17 05:09:30.988928 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:09:30.988939 | orchestrator | 2026-02-17 05:09:30.988950 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-17 05:09:30.988960 | orchestrator | Tuesday 17 February 2026 05:09:18 +0000 (0:00:02.175) 0:03:03.932 ****** 2026-02-17 05:09:30.988971 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:09:30.988982 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:09:30.988993 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:09:30.989003 | orchestrator | 2026-02-17 05:09:30.989014 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-17 05:09:30.989025 | orchestrator | Tuesday 17 February 2026 05:09:20 +0000 (0:00:02.393) 0:03:06.326 ****** 2026-02-17 05:09:30.989045 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:09:30.989056 | orchestrator | changed: [testbed-node-4] 2026-02-17 05:09:30.989067 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:09:30.989078 | orchestrator | 2026-02-17 05:09:30.989089 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-17 05:09:30.989100 | orchestrator | 2026-02-17 05:09:30.989111 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-17 05:09:30.989122 | orchestrator | Tuesday 17 February 2026 05:09:28 +0000 (0:00:07.903) 0:03:14.229 ****** 2026-02-17 05:09:30.989133 | orchestrator | ok: [testbed-manager] 2026-02-17 05:09:30.989143 | orchestrator | 2026-02-17 05:09:30.989154 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-17 05:09:30.989172 | orchestrator | Tuesday 17 February 2026 05:09:30 +0000 (0:00:02.254) 0:03:16.484 ****** 2026-02-17 05:10:41.295082 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295193 | orchestrator | 2026-02-17 05:10:41.295208 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-17 05:10:41.295220 | orchestrator | Tuesday 17 February 2026 05:09:32 +0000 (0:00:01.459) 0:03:17.943 ****** 2026-02-17 05:10:41.295231 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-17 05:10:41.295241 | orchestrator | 2026-02-17 05:10:41.295251 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-17 05:10:41.295261 | orchestrator | Tuesday 17 February 2026 05:09:34 +0000 (0:00:01.675) 0:03:19.619 ****** 2026-02-17 05:10:41.295271 | orchestrator | changed: [testbed-manager] 2026-02-17 05:10:41.295281 | orchestrator | 2026-02-17 05:10:41.295291 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-17 05:10:41.295301 | orchestrator | Tuesday 17 February 2026 05:09:36 +0000 (0:00:01.999) 0:03:21.618 ****** 2026-02-17 05:10:41.295311 | orchestrator | changed: [testbed-manager] 2026-02-17 05:10:41.295320 | orchestrator | 2026-02-17 05:10:41.295331 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-17 05:10:41.295340 | orchestrator | Tuesday 17 February 2026 05:09:37 +0000 (0:00:01.623) 0:03:23.242 ****** 2026-02-17 05:10:41.295351 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-17 05:10:41.295381 | orchestrator | 2026-02-17 05:10:41.295444 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-17 05:10:41.295456 | orchestrator | Tuesday 17 February 2026 05:09:40 +0000 (0:00:02.964) 0:03:26.207 ****** 2026-02-17 05:10:41.295465 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-17 05:10:41.295475 | orchestrator | 2026-02-17 05:10:41.295485 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-17 05:10:41.295495 | orchestrator | Tuesday 17 February 
2026 05:09:42 +0000 (0:00:01.791) 0:03:27.998 ****** 2026-02-17 05:10:41.295519 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295529 | orchestrator | 2026-02-17 05:10:41.295539 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-17 05:10:41.295549 | orchestrator | Tuesday 17 February 2026 05:09:43 +0000 (0:00:01.388) 0:03:29.386 ****** 2026-02-17 05:10:41.295559 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295569 | orchestrator | 2026-02-17 05:10:41.295578 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-17 05:10:41.295589 | orchestrator | 2026-02-17 05:10:41.295599 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-17 05:10:41.295609 | orchestrator | Tuesday 17 February 2026 05:09:45 +0000 (0:00:01.792) 0:03:31.179 ****** 2026-02-17 05:10:41.295618 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295629 | orchestrator | 2026-02-17 05:10:41.295641 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-17 05:10:41.295652 | orchestrator | Tuesday 17 February 2026 05:09:46 +0000 (0:00:01.104) 0:03:32.284 ****** 2026-02-17 05:10:41.295663 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-17 05:10:41.295675 | orchestrator | 2026-02-17 05:10:41.295687 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-17 05:10:41.295697 | orchestrator | Tuesday 17 February 2026 05:09:48 +0000 (0:00:01.478) 0:03:33.763 ****** 2026-02-17 05:10:41.295708 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295719 | orchestrator | 2026-02-17 05:10:41.295730 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-17 05:10:41.295741 | orchestrator | Tuesday 17 February 2026 
05:09:50 +0000 (0:00:01.857) 0:03:35.620 ****** 2026-02-17 05:10:41.295752 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295763 | orchestrator | 2026-02-17 05:10:41.295774 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-17 05:10:41.295785 | orchestrator | Tuesday 17 February 2026 05:09:53 +0000 (0:00:03.233) 0:03:38.854 ****** 2026-02-17 05:10:41.295796 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295807 | orchestrator | 2026-02-17 05:10:41.295818 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-17 05:10:41.295829 | orchestrator | Tuesday 17 February 2026 05:09:54 +0000 (0:00:01.472) 0:03:40.326 ****** 2026-02-17 05:10:41.295840 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295852 | orchestrator | 2026-02-17 05:10:41.295863 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-17 05:10:41.295874 | orchestrator | Tuesday 17 February 2026 05:09:56 +0000 (0:00:01.462) 0:03:41.789 ****** 2026-02-17 05:10:41.295885 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295896 | orchestrator | 2026-02-17 05:10:41.295907 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-17 05:10:41.295918 | orchestrator | Tuesday 17 February 2026 05:09:57 +0000 (0:00:01.672) 0:03:43.462 ****** 2026-02-17 05:10:41.295929 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295940 | orchestrator | 2026-02-17 05:10:41.295949 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-17 05:10:41.295959 | orchestrator | Tuesday 17 February 2026 05:10:00 +0000 (0:00:02.595) 0:03:46.057 ****** 2026-02-17 05:10:41.295969 | orchestrator | ok: [testbed-manager] 2026-02-17 05:10:41.295979 | orchestrator | 2026-02-17 05:10:41.295989 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-02-17 05:10:41.296006 | orchestrator | 2026-02-17 05:10:41.296016 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-17 05:10:41.296026 | orchestrator | Tuesday 17 February 2026 05:10:02 +0000 (0:00:01.715) 0:03:47.772 ****** 2026-02-17 05:10:41.296036 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:10:41.296045 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:10:41.296055 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:10:41.296065 | orchestrator | 2026-02-17 05:10:41.296075 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-17 05:10:41.296085 | orchestrator | Tuesday 17 February 2026 05:10:03 +0000 (0:00:01.457) 0:03:49.229 ****** 2026-02-17 05:10:41.296095 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:10:41.296105 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:10:41.296114 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:10:41.296124 | orchestrator | 2026-02-17 05:10:41.296150 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-17 05:10:41.296160 | orchestrator | Tuesday 17 February 2026 05:10:05 +0000 (0:00:01.658) 0:03:50.888 ****** 2026-02-17 05:10:41.296170 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:10:41.296180 | orchestrator | 2026-02-17 05:10:41.296190 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-17 05:10:41.296200 | orchestrator | Tuesday 17 February 2026 05:10:07 +0000 (0:00:01.750) 0:03:52.638 ****** 2026-02-17 05:10:41.296209 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-17 05:10:41.296219 | orchestrator | 2026-02-17 05:10:41.296229 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-02-17 05:10:41.296238 | orchestrator | Tuesday 17 February 2026 05:10:09 +0000 (0:00:01.886) 0:03:54.525 ****** 2026-02-17 05:10:41.296248 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 05:10:41.296258 | orchestrator | 2026-02-17 05:10:41.296267 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-17 05:10:41.296277 | orchestrator | Tuesday 17 February 2026 05:10:10 +0000 (0:00:01.868) 0:03:56.394 ****** 2026-02-17 05:10:41.296287 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:10:41.296296 | orchestrator | 2026-02-17 05:10:41.296306 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-17 05:10:41.296316 | orchestrator | Tuesday 17 February 2026 05:10:12 +0000 (0:00:01.171) 0:03:57.565 ****** 2026-02-17 05:10:41.296325 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 05:10:41.296335 | orchestrator | 2026-02-17 05:10:41.296344 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-17 05:10:41.296354 | orchestrator | Tuesday 17 February 2026 05:10:14 +0000 (0:00:02.050) 0:03:59.616 ****** 2026-02-17 05:10:41.296364 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 05:10:41.296373 | orchestrator | 2026-02-17 05:10:41.296383 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-17 05:10:41.296427 | orchestrator | Tuesday 17 February 2026 05:10:16 +0000 (0:00:02.175) 0:04:01.791 ****** 2026-02-17 05:10:41.296437 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-17 05:10:41.296447 | orchestrator | 2026-02-17 05:10:41.296456 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-17 05:10:41.296466 | orchestrator | Tuesday 17 February 2026 05:10:17 +0000 (0:00:01.196) 0:04:02.987 ****** 2026-02-17 05:10:41.296477 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-02-17 05:10:41.296486 | orchestrator | 2026-02-17 05:10:41.296499 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-17 05:10:41.296515 | orchestrator | Tuesday 17 February 2026 05:10:18 +0000 (0:00:01.225) 0:04:04.213 ****** 2026-02-17 05:10:41.296532 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-02-17 05:10:41.296556 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-02-17 05:10:41.296578 | orchestrator | } 2026-02-17 05:10:41.296594 | orchestrator | 2026-02-17 05:10:41.296618 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-17 05:10:41.296635 | orchestrator | Tuesday 17 February 2026 05:10:19 +0000 (0:00:01.131) 0:04:05.345 ****** 2026-02-17 05:10:41.296650 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:10:41.296665 | orchestrator | 2026-02-17 05:10:41.296680 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-17 05:10:41.296694 | orchestrator | Tuesday 17 February 2026 05:10:21 +0000 (0:00:01.233) 0:04:06.579 ****** 2026-02-17 05:10:41.296707 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-17 05:10:41.296721 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-17 05:10:41.296736 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-17 05:10:41.296749 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-17 05:10:41.296764 | orchestrator | 2026-02-17 05:10:41.296779 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-17 05:10:41.296795 | orchestrator | Tuesday 17 February 2026 05:10:26 +0000 (0:00:05.555) 0:04:12.135 ****** 2026-02-17 05:10:41.296811 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-02-17 05:10:41.296827 | orchestrator | 2026-02-17 05:10:41.296842 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-17 05:10:41.296858 | orchestrator | Tuesday 17 February 2026 05:10:29 +0000 (0:00:02.440) 0:04:14.575 ****** 2026-02-17 05:10:41.296874 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-17 05:10:41.296890 | orchestrator | 2026-02-17 05:10:41.296907 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-17 05:10:41.296923 | orchestrator | Tuesday 17 February 2026 05:10:31 +0000 (0:00:02.587) 0:04:17.163 ****** 2026-02-17 05:10:41.296939 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-17 05:10:41.296953 | orchestrator | 2026-02-17 05:10:41.296968 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-17 05:10:41.296984 | orchestrator | Tuesday 17 February 2026 05:10:35 +0000 (0:00:04.120) 0:04:21.284 ****** 2026-02-17 05:10:41.297000 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:10:41.297017 | orchestrator | 2026-02-17 05:10:41.297035 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-17 05:10:41.297051 | orchestrator | Tuesday 17 February 2026 05:10:36 +0000 (0:00:01.100) 0:04:22.385 ****** 2026-02-17 05:10:41.297068 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-17 05:10:41.297080 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-17 05:10:41.297090 | orchestrator | 2026-02-17 05:10:41.297100 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-17 05:10:41.297121 | orchestrator | Tuesday 17 February 2026 05:10:39 +0000 (0:00:03.003) 0:04:25.388 ****** 2026-02-17 
05:10:41.297132 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:10:41.297153 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:11:07.490709 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:11:07.490823 | orchestrator | 2026-02-17 05:11:07.490841 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-17 05:11:07.490855 | orchestrator | Tuesday 17 February 2026 05:10:41 +0000 (0:00:01.406) 0:04:26.794 ****** 2026-02-17 05:11:07.490866 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:11:07.490878 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:11:07.490889 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:11:07.490900 | orchestrator | 2026-02-17 05:11:07.490912 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-17 05:11:07.490923 | orchestrator | 2026-02-17 05:11:07.490934 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-17 05:11:07.490945 | orchestrator | Tuesday 17 February 2026 05:10:43 +0000 (0:00:02.046) 0:04:28.841 ****** 2026-02-17 05:11:07.490956 | orchestrator | ok: [testbed-manager] 2026-02-17 05:11:07.490992 | orchestrator | 2026-02-17 05:11:07.491004 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-17 05:11:07.491015 | orchestrator | Tuesday 17 February 2026 05:10:44 +0000 (0:00:01.148) 0:04:29.989 ****** 2026-02-17 05:11:07.491026 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-17 05:11:07.491038 | orchestrator | 2026-02-17 05:11:07.491049 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-17 05:11:07.491060 | orchestrator | Tuesday 17 February 2026 05:10:45 +0000 (0:00:01.506) 0:04:31.496 ****** 2026-02-17 05:11:07.491071 | orchestrator | ok: [testbed-manager] 2026-02-17 05:11:07.491082 | 
orchestrator | 2026-02-17 05:11:07.491093 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-17 05:11:07.491104 | orchestrator | 2026-02-17 05:11:07.491115 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-17 05:11:07.491141 | orchestrator | Tuesday 17 February 2026 05:10:51 +0000 (0:00:05.227) 0:04:36.724 ****** 2026-02-17 05:11:07.491152 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:11:07.491163 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:11:07.491174 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:11:07.491185 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:11:07.491196 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:11:07.491207 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:11:07.491218 | orchestrator | 2026-02-17 05:11:07.491231 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-17 05:11:07.491245 | orchestrator | Tuesday 17 February 2026 05:10:53 +0000 (0:00:01.966) 0:04:38.691 ****** 2026-02-17 05:11:07.491258 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-17 05:11:07.491271 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-17 05:11:07.491283 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-17 05:11:07.491295 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-17 05:11:07.491308 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-17 05:11:07.491320 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-17 05:11:07.491333 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 
2026-02-17 05:11:07.491345 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-17 05:11:07.491358 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-17 05:11:07.491371 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-17 05:11:07.491438 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-17 05:11:07.491452 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-17 05:11:07.491464 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-17 05:11:07.491477 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-17 05:11:07.491490 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-17 05:11:07.491503 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-17 05:11:07.491515 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-17 05:11:07.491527 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-17 05:11:07.491540 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-17 05:11:07.491553 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-17 05:11:07.491575 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-17 05:11:07.491588 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-17 05:11:07.491598 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-17 
05:11:07.491609 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-17 05:11:07.491634 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-17 05:11:07.491656 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-17 05:11:07.491686 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-17 05:11:07.491698 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-17 05:11:07.491709 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-17 05:11:07.491720 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-17 05:11:07.491730 | orchestrator | 2026-02-17 05:11:07.491741 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-17 05:11:07.491752 | orchestrator | Tuesday 17 February 2026 05:11:02 +0000 (0:00:09.689) 0:04:48.381 ****** 2026-02-17 05:11:07.491763 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:11:07.491774 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:11:07.491785 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:11:07.491796 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:11:07.491807 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:11:07.491817 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:11:07.491828 | orchestrator | 2026-02-17 05:11:07.491840 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-17 05:11:07.491851 | orchestrator | Tuesday 17 February 2026 05:11:04 +0000 (0:00:02.007) 0:04:50.389 ****** 2026-02-17 05:11:07.491862 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:11:07.491872 | orchestrator | skipping: [testbed-node-4] 
2026-02-17 05:11:07.491883 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:11:07.491894 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:11:07.491904 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:11:07.491915 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:11:07.491926 | orchestrator | 2026-02-17 05:11:07.491937 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:11:07.491954 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 05:11:07.491967 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-17 05:11:07.491978 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-17 05:11:07.491989 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-17 05:11:07.492000 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-17 05:11:07.492011 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-17 05:11:07.492022 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-17 05:11:07.492033 | orchestrator | 2026-02-17 05:11:07.492044 | orchestrator | 2026-02-17 05:11:07.492055 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:11:07.492073 | orchestrator | Tuesday 17 February 2026 05:11:07 +0000 (0:00:02.582) 0:04:52.972 ****** 2026-02-17 05:11:07.492084 | orchestrator | =============================================================================== 2026-02-17 05:11:07.492095 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 40.18s 2026-02-17 05:11:07.492106 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.51s 2026-02-17 05:11:07.492118 | orchestrator | Manage labels ----------------------------------------------------------- 9.69s 2026-02-17 05:11:07.492130 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.90s 2026-02-17 05:11:07.492140 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.56s 2026-02-17 05:11:07.492151 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.23s 2026-02-17 05:11:07.492162 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.38s 2026-02-17 05:11:07.492173 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.12s 2026-02-17 05:11:07.492184 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 3.93s 2026-02-17 05:11:07.492195 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 3.23s 2026-02-17 05:11:07.492206 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.00s 2026-02-17 05:11:07.492217 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.96s 2026-02-17 05:11:07.492228 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.74s 2026-02-17 05:11:07.492240 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.66s 2026-02-17 05:11:07.492250 | orchestrator | kubectl : Install required packages ------------------------------------- 2.59s 2026-02-17 05:11:07.492261 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.59s 2026-02-17 05:11:07.492272 | orchestrator | Manage taints 
----------------------------------------------------------- 2.58s 2026-02-17 05:11:07.492283 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.56s 2026-02-17 05:11:07.492300 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.44s 2026-02-17 05:11:07.933246 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.44s 2026-02-17 05:11:08.320525 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-17 05:11:08.320622 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-17 05:11:08.327120 | orchestrator | + set -e 2026-02-17 05:11:08.327169 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 05:11:08.327183 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 05:11:08.327196 | orchestrator | ++ INTERACTIVE=false 2026-02-17 05:11:08.327207 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 05:11:08.327222 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 05:11:08.327241 | orchestrator | + osism apply openstackclient 2026-02-17 05:11:20.582573 | orchestrator | 2026-02-17 05:11:20 | INFO  | Task f03d4bb3-e937-44ff-b488-9e988cb27383 (openstackclient) was prepared for execution. 2026-02-17 05:11:20.582666 | orchestrator | 2026-02-17 05:11:20 | INFO  | It takes a moment until task f03d4bb3-e937-44ff-b488-9e988cb27383 (openstackclient) has been started and output is visible here. 
2026-02-17 05:11:55.110749 | orchestrator | 2026-02-17 05:11:55.110856 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-17 05:11:55.110872 | orchestrator | 2026-02-17 05:11:55.110884 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-17 05:11:55.110896 | orchestrator | Tuesday 17 February 2026 05:11:26 +0000 (0:00:01.897) 0:00:01.897 ****** 2026-02-17 05:11:55.110908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-17 05:11:55.110950 | orchestrator | 2026-02-17 05:11:55.110962 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-17 05:11:55.110973 | orchestrator | Tuesday 17 February 2026 05:11:28 +0000 (0:00:01.857) 0:00:03.754 ****** 2026-02-17 05:11:55.110985 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-17 05:11:55.111015 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-17 05:11:55.111027 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-17 05:11:55.111038 | orchestrator | 2026-02-17 05:11:55.111050 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-17 05:11:55.111061 | orchestrator | Tuesday 17 February 2026 05:11:30 +0000 (0:00:02.217) 0:00:05.971 ****** 2026-02-17 05:11:55.111071 | orchestrator | changed: [testbed-manager] 2026-02-17 05:11:55.111083 | orchestrator | 2026-02-17 05:11:55.111093 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-17 05:11:55.111104 | orchestrator | Tuesday 17 February 2026 05:11:33 +0000 (0:00:02.258) 0:00:08.230 ****** 2026-02-17 05:11:55.111115 | orchestrator | ok: [testbed-manager] 2026-02-17 05:11:55.111127 | 
orchestrator | 2026-02-17 05:11:55.111138 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-17 05:11:55.111149 | orchestrator | Tuesday 17 February 2026 05:11:35 +0000 (0:00:02.164) 0:00:10.395 ****** 2026-02-17 05:11:55.111160 | orchestrator | ok: [testbed-manager] 2026-02-17 05:11:55.111171 | orchestrator | 2026-02-17 05:11:55.111182 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-17 05:11:55.111193 | orchestrator | Tuesday 17 February 2026 05:11:37 +0000 (0:00:01.909) 0:00:12.304 ****** 2026-02-17 05:11:55.111204 | orchestrator | ok: [testbed-manager] 2026-02-17 05:11:55.111215 | orchestrator | 2026-02-17 05:11:55.111226 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-17 05:11:55.111236 | orchestrator | Tuesday 17 February 2026 05:11:38 +0000 (0:00:01.472) 0:00:13.776 ****** 2026-02-17 05:11:55.111248 | orchestrator | changed: [testbed-manager] 2026-02-17 05:11:55.111259 | orchestrator | 2026-02-17 05:11:55.111270 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-17 05:11:55.111281 | orchestrator | Tuesday 17 February 2026 05:11:49 +0000 (0:00:10.519) 0:00:24.296 ****** 2026-02-17 05:11:55.111294 | orchestrator | changed: [testbed-manager] 2026-02-17 05:11:55.111307 | orchestrator | 2026-02-17 05:11:55.111333 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-17 05:11:55.111345 | orchestrator | Tuesday 17 February 2026 05:11:51 +0000 (0:00:01.984) 0:00:26.281 ****** 2026-02-17 05:11:55.111357 | orchestrator | changed: [testbed-manager] 2026-02-17 05:11:55.111370 | orchestrator | 2026-02-17 05:11:55.111382 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-17 05:11:55.111415 | orchestrator | Tuesday 17 February 
2026 05:11:52 +0000 (0:00:01.573) 0:00:27.854 ****** 2026-02-17 05:11:55.111427 | orchestrator | ok: [testbed-manager] 2026-02-17 05:11:55.111440 | orchestrator | 2026-02-17 05:11:55.111452 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:11:55.111464 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-17 05:11:55.111477 | orchestrator | 2026-02-17 05:11:55.111489 | orchestrator | 2026-02-17 05:11:55.111502 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:11:55.111514 | orchestrator | Tuesday 17 February 2026 05:11:54 +0000 (0:00:01.904) 0:00:29.759 ****** 2026-02-17 05:11:55.111526 | orchestrator | =============================================================================== 2026-02-17 05:11:55.111539 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.52s 2026-02-17 05:11:55.111551 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.26s 2026-02-17 05:11:55.111563 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.22s 2026-02-17 05:11:55.111594 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.16s 2026-02-17 05:11:55.111607 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.98s 2026-02-17 05:11:55.111619 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.91s 2026-02-17 05:11:55.111632 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.91s 2026-02-17 05:11:55.111644 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.86s 2026-02-17 05:11:55.111655 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.57s 
2026-02-17 05:11:55.111666 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.47s 2026-02-17 05:11:55.500203 | orchestrator | + osism apply -a upgrade common 2026-02-17 05:11:57.606980 | orchestrator | 2026-02-17 05:11:57 | INFO  | Task 884b21d3-cf3e-48b3-a248-fedc9aed196a (common) was prepared for execution. 2026-02-17 05:11:57.607083 | orchestrator | 2026-02-17 05:11:57 | INFO  | It takes a moment until task 884b21d3-cf3e-48b3-a248-fedc9aed196a (common) has been started and output is visible here. 2026-02-17 05:12:14.142970 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-17 05:12:14.143091 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-17 05:12:14.143117 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-17 05:12:14.143127 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-17 05:12:14.143148 | orchestrator | 2026-02-17 05:12:14.143159 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-17 05:12:14.143169 | orchestrator | 2026-02-17 05:12:14.143190 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-17 05:12:14.143201 | orchestrator | Tuesday 17 February 2026 05:12:04 +0000 (0:00:02.309) 0:00:02.310 ****** 2026-02-17 05:12:14.143211 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:12:14.143223 | orchestrator | 2026-02-17 05:12:14.143233 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-17 05:12:14.143243 | orchestrator | Tuesday 17 February 2026 05:12:06 +0000 (0:00:02.137) 0:00:04.447 ****** 2026-02-17 05:12:14.143253 | orchestrator | ok: [testbed-node-0] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:12:14.143262 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:12:14.143272 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:12:14.143282 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:12:14.143292 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:12:14.143302 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:12:14.143311 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:12:14.143321 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:12:14.143331 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:12:14.143340 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:12:14.143350 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:12:14.143360 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:12:14.143369 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:12:14.143424 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:12:14.143435 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:12:14.143445 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:12:14.143455 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:12:14.143464 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:12:14.143474 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:12:14.143484 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:12:14.143493 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:12:14.143503 | orchestrator | 2026-02-17 05:12:14.143513 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-17 05:12:14.143522 | orchestrator | Tuesday 17 February 2026 05:12:09 +0000 (0:00:02.995) 0:00:07.443 ****** 2026-02-17 05:12:14.143532 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:12:14.143544 | orchestrator | 2026-02-17 05:12:14.143554 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-17 05:12:14.143564 | orchestrator | Tuesday 17 February 2026 05:12:11 +0000 (0:00:02.142) 0:00:09.585 ****** 2026-02-17 05:12:14.143578 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:14.143622 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:14.143634 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:14.143644 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:14.143655 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:14.143672 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:14.143682 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:14.143854 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:14.143876 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.146820 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.146948 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147011 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147053 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147087 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147109 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147129 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147173 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147193 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147213 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147245 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147264 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:16.147284 | orchestrator | 2026-02-17 05:12:16.147305 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-17 05:12:16.147331 | orchestrator | Tuesday 17 February 2026 05:12:15 +0000 (0:00:03.597) 0:00:13.182 ****** 2026-02-17 05:12:16.147355 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:16.147377 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:16.147425 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.147461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:16.912298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:16.912498 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:16.912672 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:12:16.912686 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:12:16.912698 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:12:16.912710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912721 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:12:16.912733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:16.912763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912774 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:12:16.912786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:16.912815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:16.912835 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:12:16.912857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.215992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.216077 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:12:18.216093 | orchestrator | 2026-02-17 05:12:18.216106 | orchestrator | 
TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-17 05:12:18.216118 | orchestrator | Tuesday 17 February 2026 05:12:16 +0000 (0:00:01.790) 0:00:14.973 ****** 2026-02-17 05:12:18.216131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:18.216160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:18.216172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-17 05:12:18.216184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.216197 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.216228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:18.216240 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:12:18.216268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:18.216281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.216292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.216304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:18.216316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.216328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.216353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:18.216380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:26.436603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:26.436744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:26.436764 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:12:26.436778 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:12:26.436790 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:12:26.436801 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:12:26.436818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:26.436833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:26.436844 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:12:26.436856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:26.436892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:26.436904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:26.436916 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:12:26.436927 | orchestrator | 2026-02-17 05:12:26.436939 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-17 05:12:26.436952 | orchestrator | Tuesday 17 February 2026 05:12:19 +0000 (0:00:02.316) 0:00:17.289 ****** 2026-02-17 05:12:26.436963 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:12:26.436973 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:12:26.436984 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:12:26.436995 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:12:26.437023 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:12:26.437035 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:12:26.437046 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:12:26.437057 | orchestrator | 2026-02-17 05:12:26.437068 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-17 05:12:26.437079 | orchestrator | Tuesday 17 February 2026 05:12:20 +0000 (0:00:00.831) 0:00:18.121 ****** 2026-02-17 05:12:26.437090 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:12:26.437101 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:12:26.437111 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
05:12:26.437124 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:12:26.437137 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:12:26.437149 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:12:26.437161 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:12:26.437173 | orchestrator | 2026-02-17 05:12:26.437185 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-17 05:12:26.437196 | orchestrator | Tuesday 17 February 2026 05:12:20 +0000 (0:00:00.814) 0:00:18.935 ****** 2026-02-17 05:12:26.437207 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:12:26.437218 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:12:26.437228 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:12:26.437239 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:12:26.437250 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:12:26.437260 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:12:26.437271 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:12:26.437282 | orchestrator | 2026-02-17 05:12:26.437292 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-17 05:12:26.437309 | orchestrator | Tuesday 17 February 2026 05:12:21 +0000 (0:00:00.689) 0:00:19.625 ****** 2026-02-17 05:12:26.437320 | orchestrator | changed: [testbed-manager] 2026-02-17 05:12:26.437338 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:12:26.437349 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:12:26.437360 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:12:26.437370 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:12:26.437381 | orchestrator | changed: [testbed-node-4] 2026-02-17 05:12:26.437392 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:12:26.437430 | orchestrator | 2026-02-17 05:12:26.437441 | orchestrator | TASK [common : Copying over config.json files for services] 
******************** 2026-02-17 05:12:26.437452 | orchestrator | Tuesday 17 February 2026 05:12:23 +0000 (0:00:01.763) 0:00:21.389 ****** 2026-02-17 05:12:26.437464 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:26.437477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:26.437489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:26.437500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:26.437521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:27.447006 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:27.447168 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:27.447181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447246 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447269 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447280 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:27.447366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:40.671348 | orchestrator | 2026-02-17 05:12:40.671530 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-17 05:12:40.671556 | orchestrator | Tuesday 17 February 2026 05:12:27 +0000 (0:00:04.121) 0:00:25.511 ****** 2026-02-17 05:12:40.671576 | orchestrator | [WARNING]: Skipped 2026-02-17 05:12:40.671596 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-17 05:12:40.671616 | orchestrator | to this access issue: 2026-02-17 05:12:40.671631 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-17 05:12:40.671642 | orchestrator | directory 2026-02-17 05:12:40.671653 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 05:12:40.671666 | orchestrator | 2026-02-17 05:12:40.671677 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-17 05:12:40.671688 | orchestrator | Tuesday 17 February 2026 05:12:28 +0000 (0:00:01.352) 0:00:26.863 ****** 2026-02-17 05:12:40.671715 | orchestrator | [WARNING]: Skipped 2026-02-17 05:12:40.671727 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-17 05:12:40.671738 | orchestrator | to this access issue: 2026-02-17 05:12:40.671749 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-17 05:12:40.671760 | orchestrator | directory 2026-02-17 05:12:40.671771 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 05:12:40.671782 | orchestrator | 2026-02-17 05:12:40.671793 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-17 05:12:40.671804 | orchestrator | Tuesday 17 February 2026 05:12:29 +0000 (0:00:00.871) 0:00:27.735 ****** 
2026-02-17 05:12:40.671815 | orchestrator | [WARNING]: Skipped 2026-02-17 05:12:40.671826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-17 05:12:40.671837 | orchestrator | to this access issue: 2026-02-17 05:12:40.671848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-17 05:12:40.671859 | orchestrator | directory 2026-02-17 05:12:40.671870 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 05:12:40.671882 | orchestrator | 2026-02-17 05:12:40.671896 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-17 05:12:40.671908 | orchestrator | Tuesday 17 February 2026 05:12:30 +0000 (0:00:00.892) 0:00:28.627 ****** 2026-02-17 05:12:40.671921 | orchestrator | [WARNING]: Skipped 2026-02-17 05:12:40.671934 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-17 05:12:40.671946 | orchestrator | to this access issue: 2026-02-17 05:12:40.671959 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-17 05:12:40.671971 | orchestrator | directory 2026-02-17 05:12:40.671984 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 05:12:40.671996 | orchestrator | 2026-02-17 05:12:40.672009 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-17 05:12:40.672022 | orchestrator | Tuesday 17 February 2026 05:12:31 +0000 (0:00:00.944) 0:00:29.571 ****** 2026-02-17 05:12:40.672034 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:12:40.672047 | orchestrator | changed: [testbed-manager] 2026-02-17 05:12:40.672059 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:12:40.672071 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:12:40.672083 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:12:40.672095 | orchestrator | changed: 
[testbed-node-4] 2026-02-17 05:12:40.672107 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:12:40.672121 | orchestrator | 2026-02-17 05:12:40.672134 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-17 05:12:40.672147 | orchestrator | Tuesday 17 February 2026 05:12:34 +0000 (0:00:03.035) 0:00:32.607 ****** 2026-02-17 05:12:40.672182 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:12:40.672196 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:12:40.672209 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:12:40.672222 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:12:40.672235 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:12:40.672246 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:12:40.672257 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:12:40.672268 | orchestrator | 2026-02-17 05:12:40.672279 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-17 05:12:40.672290 | orchestrator | Tuesday 17 February 2026 05:12:36 +0000 (0:00:02.235) 0:00:34.842 ****** 2026-02-17 05:12:40.672301 | orchestrator | ok: [testbed-manager] 2026-02-17 05:12:40.672312 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:12:40.672323 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:12:40.672334 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:12:40.672345 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:12:40.672356 | orchestrator | ok: 
[testbed-node-4] 2026-02-17 05:12:40.672366 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:12:40.672377 | orchestrator | 2026-02-17 05:12:40.672388 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-17 05:12:40.672399 | orchestrator | Tuesday 17 February 2026 05:12:38 +0000 (0:00:01.766) 0:00:36.609 ****** 2026-02-17 05:12:40.672473 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:40.672497 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:40.672511 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:40.672525 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:40.672546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:40.672558 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:40.672569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:40.672588 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:45.179744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:45.179868 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:45.179888 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:45.179924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:45.179936 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:45.179948 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:45.179960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:45.179990 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:45.180009 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:45.180021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:45.180033 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:45.180053 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:45.180064 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:45.180076 | orchestrator | 2026-02-17 05:12:45.180089 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-17 05:12:45.180101 | orchestrator | Tuesday 17 February 2026 05:12:40 +0000 (0:00:02.122) 0:00:38.732 ****** 2026-02-17 05:12:45.180112 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:12:45.180124 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:12:45.180135 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:12:45.180146 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:12:45.180157 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:12:45.180167 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:12:45.180178 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:12:45.180189 | orchestrator | 2026-02-17 05:12:45.180200 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-17 05:12:45.180211 | orchestrator | Tuesday 17 February 2026 05:12:42 +0000 (0:00:02.102) 0:00:40.834 ****** 2026-02-17 05:12:45.180222 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:12:45.180233 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:12:45.180244 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:12:45.180255 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:12:45.180266 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:12:45.180285 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:12:47.773551 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:12:47.773653 | orchestrator | 2026-02-17 05:12:47.773666 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-17 05:12:47.773679 | orchestrator | Tuesday 17 February 2026 05:12:45 +0000 (0:00:02.405) 0:00:43.240 ****** 2026-02-17 05:12:47.773693 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:47.773759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:47.773779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:47.773795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:47.773811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:47.773827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:47.773844 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:47.773884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:12:47.773918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:47.773935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:47.773952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:47.773968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:47.773985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:47.774011 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:50.256755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:50.256924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:50.256944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:50.256955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:50.256965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:50.256981 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:50.256991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:12:50.257002 | orchestrator | 2026-02-17 05:12:50.257013 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-17 05:12:50.257024 | orchestrator | Tuesday 17 February 2026 05:12:48 +0000 (0:00:03.555) 0:00:46.795 ****** 2026-02-17 05:12:50.257035 | orchestrator | changed: [testbed-manager] => { 2026-02-17 05:12:50.257046 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:12:50.257056 | orchestrator | } 2026-02-17 05:12:50.257066 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:12:50.257075 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:12:50.257085 | orchestrator | } 2026-02-17 05:12:50.257094 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:12:50.257104 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:12:50.257113 | orchestrator | } 2026-02-17 05:12:50.257123 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 05:12:50.257139 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:12:50.257149 | orchestrator | } 2026-02-17 
05:12:50.257159 | orchestrator | changed: [testbed-node-3] => { 2026-02-17 05:12:50.257168 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:12:50.257177 | orchestrator | } 2026-02-17 05:12:50.257187 | orchestrator | changed: [testbed-node-4] => { 2026-02-17 05:12:50.257196 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:12:50.257206 | orchestrator | } 2026-02-17 05:12:50.257216 | orchestrator | changed: [testbed-node-5] => { 2026-02-17 05:12:50.257225 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:12:50.257235 | orchestrator | } 2026-02-17 05:12:50.257244 | orchestrator | 2026-02-17 05:12:50.257272 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-17 05:12:50.257285 | orchestrator | Tuesday 17 February 2026 05:12:49 +0000 (0:00:01.028) 0:00:47.823 ****** 2026-02-17 05:12:50.257302 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:50.257315 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:50.257327 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:50.257339 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:12:50.257350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:50.257362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:50.257374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:50.257401 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:12:50.257438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:50.257465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732372 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:12:52.732385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:52.732396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732476 | orchestrator | 
skipping: [testbed-node-2] 2026-02-17 05:12:52.732487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:52.732512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732528 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-17 05:12:52.732536 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-17 05:12:52.732552 | orchestrator | 
skipping: [testbed-node-3] 2026-02-17 05:12:52.732581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:52.732590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732605 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:12:52.732613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:12:52.732626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:12:52.732641 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:12:52.732649 | orchestrator | 2026-02-17 05:12:52.732657 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:12:52.732664 | orchestrator | Tuesday 17 February 2026 05:12:51 +0000 (0:00:02.148) 0:00:49.972 ****** 2026-02-17 05:12:52.732672 | orchestrator | 2026-02-17 05:12:52.732680 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-02-17 05:12:52.732687 | orchestrator | Tuesday 17 February 2026 05:12:51 +0000 (0:00:00.076) 0:00:50.049 ****** 2026-02-17 05:12:52.732694 | orchestrator | 2026-02-17 05:12:52.732701 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:12:52.732709 | orchestrator | Tuesday 17 February 2026 05:12:52 +0000 (0:00:00.072) 0:00:50.122 ****** 2026-02-17 05:12:52.732716 | orchestrator | 2026-02-17 05:12:52.732723 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:12:52.732731 | orchestrator | Tuesday 17 February 2026 05:12:52 +0000 (0:00:00.071) 0:00:50.193 ****** 2026-02-17 05:12:52.732738 | orchestrator | 2026-02-17 05:12:52.732745 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:12:52.732752 | orchestrator | Tuesday 17 February 2026 05:12:52 +0000 (0:00:00.072) 0:00:50.266 ****** 2026-02-17 05:12:52.732759 | orchestrator | 2026-02-17 05:12:52.732767 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:12:52.732778 | orchestrator | Tuesday 17 February 2026 05:12:52 +0000 (0:00:00.328) 0:00:50.595 ****** 2026-02-17 05:12:54.877690 | orchestrator | 2026-02-17 05:12:54.877811 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:12:54.877832 | orchestrator | Tuesday 17 February 2026 05:12:52 +0000 (0:00:00.074) 0:00:50.670 ****** 2026-02-17 05:12:54.877848 | orchestrator | 2026-02-17 05:12:54.877858 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-17 05:12:54.877867 | orchestrator | Tuesday 17 February 2026 05:12:52 +0000 (0:00:00.105) 0:00:50.775 ****** 2026-02-17 05:12:54.877875 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 
2026-02-17 05:12:54.877885 | orchestrator | (): '452c6ad9-7fa9-99ad-5d8c-00000000000f' 2026-02-17 05:12:54.877906 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_v5jda5ju/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_v5jda5ju/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_v5jda5ju/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, 
explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-17 05:12:54.877959 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_mtrerwc1/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_mtrerwc1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_mtrerwc1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise 
create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-17 05:12:54.877978 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_erbw9wyg/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_erbw9wyg/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_erbw9wyg/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-17 05:12:54.878000 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_j3xqti2f/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_j3xqti2f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_j3xqti2f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-17 05:12:56.505681 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_fmir8o0w/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_fmir8o0w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_fmir8o0w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-17 05:12:56.505823 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_stcxcg7h/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_stcxcg7h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_stcxcg7h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-17 05:12:56.505869 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6t_kkc_c/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_6t_kkc_c/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6t_kkc_c/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-17 05:12:56.505891 | orchestrator | 2026-02-17 05:12:56.505951 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:12:56.505965 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-17 05:12:56.505979 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-17 05:12:56.505989 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-17 05:12:56.506079 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-17 05:12:56.506093 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-17 05:12:56.506104 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-17 05:12:56.506115 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-17 05:12:56.506126 | orchestrator | 2026-02-17 05:12:56.506137 | orchestrator | 2026-02-17 05:12:56.506158 | orchestrator | TASKS RECAP ******************************************************************** 
2026-02-17 05:12:57.011812 | orchestrator | 2026-02-17 05:12:57 | INFO  | Task dcf06881-349e-4fef-83d1-c4a2ce6f57da (common) was prepared for execution. 2026-02-17 05:12:57.011905 | orchestrator | 2026-02-17 05:12:57 | INFO  | It takes a moment until task dcf06881-349e-4fef-83d1-c4a2ce6f57da (common) has been started and output is visible here. 2026-02-17 05:13:15.537176 | orchestrator | Tuesday 17 February 2026 05:12:56 +0000 (0:00:03.792) 0:00:54.568 ****** 2026-02-17 05:13:15.537316 | orchestrator | =============================================================================== 2026-02-17 05:13:15.537344 | orchestrator | common : Copying over config.json files for services -------------------- 4.12s 2026-02-17 05:13:15.537363 | orchestrator | common : Restart fluentd container -------------------------------------- 3.79s 2026-02-17 05:13:15.537375 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.60s 2026-02-17 05:13:15.537408 | orchestrator | service-check-containers : common | Check containers -------------------- 3.56s 2026-02-17 05:13:15.537420 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.04s 2026-02-17 05:13:15.537481 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.00s 2026-02-17 05:13:15.537493 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.41s 2026-02-17 05:13:15.537505 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.32s 2026-02-17 05:13:15.537516 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.24s 2026-02-17 05:13:15.537528 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.15s 2026-02-17 05:13:15.537539 | orchestrator | common : include_tasks -------------------------------------------------- 2.14s 2026-02-17 05:13:15.537550 | orchestrator | 
common : include_tasks -------------------------------------------------- 2.14s 2026-02-17 05:13:15.537561 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.12s 2026-02-17 05:13:15.537572 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.10s 2026-02-17 05:13:15.537583 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.79s 2026-02-17 05:13:15.537595 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.77s 2026-02-17 05:13:15.537606 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.76s 2026-02-17 05:13:15.537617 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.35s 2026-02-17 05:13:15.537628 | orchestrator | service-check-containers : common | Notify handlers to restart containers --- 1.03s 2026-02-17 05:13:15.537639 | orchestrator | common : Find custom fluentd output config files ------------------------ 0.94s 2026-02-17 05:13:15.537651 | orchestrator | 2026-02-17 05:13:15.537663 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-17 05:13:15.537674 | orchestrator | 2026-02-17 05:13:15.537686 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-17 05:13:15.537720 | orchestrator | Tuesday 17 February 2026 05:13:03 +0000 (0:00:01.885) 0:00:01.885 ****** 2026-02-17 05:13:15.537732 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:13:15.537745 | orchestrator | 2026-02-17 05:13:15.537756 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-17 05:13:15.537773 | orchestrator | Tuesday 17 February 2026 05:13:06 +0000 
(0:00:03.580) 0:00:05.466 ****** 2026-02-17 05:13:15.537785 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:13:15.537797 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:13:15.537808 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:13:15.537819 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:13:15.537829 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:13:15.537840 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:13:15.537851 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:13:15.537862 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:13:15.537873 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-17 05:13:15.537885 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:13:15.537896 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:13:15.537907 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:13:15.537918 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:13:15.537929 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:13:15.537940 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:13:15.537951 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-17 05:13:15.537962 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-02-17 05:13:15.537973 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:13:15.537984 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:13:15.537995 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:13:15.538100 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-17 05:13:15.538115 | orchestrator | 2026-02-17 05:13:15.538126 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-17 05:13:15.538137 | orchestrator | Tuesday 17 February 2026 05:13:09 +0000 (0:00:03.236) 0:00:08.703 ****** 2026-02-17 05:13:15.538148 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:13:15.538161 | orchestrator | 2026-02-17 05:13:15.538172 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-17 05:13:15.538183 | orchestrator | Tuesday 17 February 2026 05:13:12 +0000 (0:00:02.983) 0:00:11.686 ****** 2026-02-17 05:13:15.538197 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:15.538222 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:15.538235 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:15.538253 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:15.538265 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:15.538276 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:15.538303 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:18.126744 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-17 05:13:18.126874 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:18.126891 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:18.126918 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-17 05:13:18.126931 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:18.126942 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:18.126972 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 
05:13:18.126986 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:18.127008 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:18.127020 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:18.127031 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:18.127048 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:18.127059 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:18.127071 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:18.127083 | orchestrator |
2026-02-17 05:13:18.127095 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-02-17 05:13:18.127109 | orchestrator | Tuesday 17 February 2026 05:13:17 +0000 (0:00:04.489) 0:00:16.176 ******
2026-02-17 05:13:18.127121 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:18.127145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:20.331798 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.331939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.331975 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.331991 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:13:20.332004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.332016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:20.332028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:20.332041 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:13:20.332074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.332105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:20.332117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.332129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.332141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.332153 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:13:20.332164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.332176 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:13:20.332187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:20.332206 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:13:20.332217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:20.332237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:23.255608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255627 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:13:23.255646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255671 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:13:23.255682 | orchestrator |
2026-02-17 05:13:23.255694 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-17 05:13:23.255706 | orchestrator | Tuesday 17 February 2026 05:13:20 +0000 (0:00:02.863) 0:00:19.040 ******
2026-02-17 05:13:23.255718 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:23.255750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:23.255763 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:23.255816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255840 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:13:23.255852 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:23.255888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:23.255905 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:13:23.255929 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:13:23.255981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:35.277318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:35.277435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:35.277481 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:13:35.277512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:35.277539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:35.277583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:35.277596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:35.277608 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:13:35.277620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:35.277649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:35.277661 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:13:35.277672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:35.277689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:35.277701 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:13:35.277712 | orchestrator |
2026-02-17 05:13:35.277732 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-17 05:13:35.277745 | orchestrator | Tuesday 17 February 2026 05:13:23 +0000 (0:00:02.940) 0:00:21.980 ******
2026-02-17 05:13:35.277756 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:13:35.277767 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:13:35.277777 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:13:35.277788 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:13:35.277799 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:13:35.277810 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:13:35.277821 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:13:35.277832 | orchestrator |
2026-02-17 05:13:35.277844 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-17 05:13:35.277855 | orchestrator | Tuesday 17 February 2026 05:13:25 +0000 (0:00:02.036) 0:00:24.017 ******
2026-02-17 05:13:35.277866 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:13:35.277877 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:13:35.277888 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:13:35.277899 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:13:35.277910 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:13:35.277920 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:13:35.277931 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:13:35.277942 | orchestrator |
2026-02-17 05:13:35.277953 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-17 05:13:35.277964 | orchestrator | Tuesday 17 February 2026 05:13:27 +0000 (0:00:01.942) 0:00:25.960 ******
2026-02-17 05:13:35.277975 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:13:35.277986 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:13:35.277997 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:13:35.278008 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:13:35.278075 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:13:35.278088 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:13:35.278098 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:13:35.278109 | orchestrator |
2026-02-17 05:13:35.278120 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-17 05:13:35.278131 | orchestrator | Tuesday 17 February 2026 05:13:29 +0000 (0:00:02.959) 0:00:27.944 ******
2026-02-17 05:13:35.278143 | orchestrator | ok: [testbed-manager]
2026-02-17 05:13:35.278155 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:13:35.278166 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:13:35.278177 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:13:35.278188 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:13:35.278199 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:13:35.278209 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:13:35.278220 | orchestrator |
2026-02-17 05:13:35.278231 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-17 05:13:35.278242 | orchestrator | Tuesday 17 February 2026 05:13:32 +0000 (0:00:02.959) 0:00:30.904 ******
2026-02-17 05:13:35.278254 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:35.278275 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:38.081643 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:38.081745 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:38.081756 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:38.081765 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081773 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:38.081780 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081788 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081829 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081842 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081850 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081860 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081867 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081875 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-17 05:13:38.081882 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:38.081899 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:57.851174 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:57.851285 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:13:57.851302 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/',
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:57.851316 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:57.851328 | orchestrator | 2026-02-17 05:13:57.851339 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-17 05:13:57.851351 | orchestrator | Tuesday 17 February 2026 05:13:38 +0000 (0:00:05.899) 0:00:36.804 ****** 2026-02-17 05:13:57.851361 | orchestrator | [WARNING]: Skipped 2026-02-17 05:13:57.851373 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-17 05:13:57.851383 | orchestrator | to this access issue: 2026-02-17 05:13:57.851393 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-17 05:13:57.851403 | orchestrator | directory 2026-02-17 05:13:57.851413 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 05:13:57.851424 | orchestrator | 2026-02-17 05:13:57.851435 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-17 05:13:57.851445 | orchestrator | Tuesday 17 February 2026 05:13:40 +0000 (0:00:02.376) 0:00:39.181 ****** 2026-02-17 05:13:57.851455 | orchestrator | [WARNING]: Skipped 2026-02-17 05:13:57.851465 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-17 05:13:57.851535 | orchestrator | to this access issue: 2026-02-17 05:13:57.851547 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 
2026-02-17 05:13:57.851558 | orchestrator | directory 2026-02-17 05:13:57.851568 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 05:13:57.851579 | orchestrator | 2026-02-17 05:13:57.851609 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-17 05:13:57.851620 | orchestrator | Tuesday 17 February 2026 05:13:42 +0000 (0:00:01.871) 0:00:41.053 ****** 2026-02-17 05:13:57.851630 | orchestrator | [WARNING]: Skipped 2026-02-17 05:13:57.851641 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-17 05:13:57.851651 | orchestrator | to this access issue: 2026-02-17 05:13:57.851662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-17 05:13:57.851672 | orchestrator | directory 2026-02-17 05:13:57.851682 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 05:13:57.851693 | orchestrator | 2026-02-17 05:13:57.851703 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-17 05:13:57.851716 | orchestrator | Tuesday 17 February 2026 05:13:44 +0000 (0:00:01.881) 0:00:42.935 ****** 2026-02-17 05:13:57.851728 | orchestrator | [WARNING]: Skipped 2026-02-17 05:13:57.851740 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-17 05:13:57.851752 | orchestrator | to this access issue: 2026-02-17 05:13:57.851764 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-17 05:13:57.851776 | orchestrator | directory 2026-02-17 05:13:57.851788 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-17 05:13:57.851800 | orchestrator | 2026-02-17 05:13:57.851827 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-17 05:13:57.851840 | orchestrator | Tuesday 17 February 2026 05:13:46 +0000 (0:00:01.895) 0:00:44.831 
****** 2026-02-17 05:13:57.851852 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:13:57.851863 | orchestrator | ok: [testbed-manager] 2026-02-17 05:13:57.851875 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:13:57.851905 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:13:57.851916 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:13:57.851927 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:13:57.851939 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:13:57.851953 | orchestrator | 2026-02-17 05:13:57.851971 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-17 05:13:57.851987 | orchestrator | Tuesday 17 February 2026 05:13:50 +0000 (0:00:04.217) 0:00:49.048 ****** 2026-02-17 05:13:57.852005 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:13:57.852024 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:13:57.852037 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:13:57.852048 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:13:57.852060 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:13:57.852077 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:13:57.852087 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-17 05:13:57.852097 | orchestrator | 2026-02-17 05:13:57.852107 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-17 05:13:57.852117 | orchestrator | Tuesday 17 February 2026 05:13:53 +0000 (0:00:03.542) 0:00:52.590 ****** 
2026-02-17 05:13:57.852126 | orchestrator | ok: [testbed-manager] 2026-02-17 05:13:57.852136 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:13:57.852146 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:13:57.852155 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:13:57.852165 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:13:57.852175 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:13:57.852184 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:13:57.852194 | orchestrator | 2026-02-17 05:13:57.852203 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-17 05:13:57.852222 | orchestrator | Tuesday 17 February 2026 05:13:56 +0000 (0:00:03.055) 0:00:55.646 ****** 2026-02-17 05:13:57.852234 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:57.852247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 
05:13:57.852258 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:57.852268 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:57.852287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:13:58.743799 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:58.743948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:13:58.744004 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:58.744025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:13:58.744045 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:58.744063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:13:58.744080 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:58.744134 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:58.744164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:13:58.744195 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:58.744216 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:13:58.744236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:13:58.744257 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:58.744278 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:58.744297 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:13:58.744327 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:08.574795 | orchestrator | 2026-02-17 05:14:08.574940 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-17 05:14:08.574960 | orchestrator | Tuesday 17 February 2026 05:13:59 +0000 (0:00:02.921) 0:00:58.568 ****** 2026-02-17 05:14:08.574973 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:14:08.575062 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:14:08.575078 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:14:08.575090 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:14:08.575102 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:14:08.575114 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-17 05:14:08.575125 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
2026-02-17 05:14:08.575136 | orchestrator | 2026-02-17 05:14:08.575148 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-17 05:14:08.575159 | orchestrator | Tuesday 17 February 2026 05:14:02 +0000 (0:00:02.985) 0:01:01.553 ****** 2026-02-17 05:14:08.575171 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:14:08.575182 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:14:08.575194 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:14:08.575205 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:14:08.575216 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:14:08.575227 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:14:08.575238 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-17 05:14:08.575249 | orchestrator | 2026-02-17 05:14:08.575261 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-17 05:14:08.575272 | orchestrator | Tuesday 17 February 2026 05:14:06 +0000 (0:00:03.325) 0:01:04.879 ****** 2026-02-17 05:14:08.575287 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-02-17 05:14:08.575303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:14:08.575316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:14:08.575330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:14:08.575377 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:14:08.575392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:14:08.575407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-17 05:14:08.575421 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:08.575436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:08.575450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:08.575464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:08.575524 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:14:13.131761 | orchestrator | 2026-02-17 05:14:13.131773 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-17 05:14:13.131785 | orchestrator | Tuesday 17 February 2026 05:14:10 +0000 (0:00:04.520) 0:01:09.399 ****** 2026-02-17 05:14:13.131797 | orchestrator | changed: [testbed-manager] => { 2026-02-17 05:14:13.131809 | 
orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:14:13.131821 | orchestrator | } 2026-02-17 05:14:13.131832 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:14:13.131843 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:14:13.131854 | orchestrator | } 2026-02-17 05:14:13.131864 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:14:13.131875 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:14:13.131886 | orchestrator | } 2026-02-17 05:14:13.131897 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 05:14:13.131907 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:14:13.131918 | orchestrator | } 2026-02-17 05:14:13.131931 | orchestrator | changed: [testbed-node-3] => { 2026-02-17 05:14:13.131943 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:14:13.131955 | orchestrator | } 2026-02-17 05:14:13.131968 | orchestrator | changed: [testbed-node-4] => { 2026-02-17 05:14:13.131980 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:14:13.131992 | orchestrator | } 2026-02-17 05:14:13.132004 | orchestrator | changed: [testbed-node-5] => { 2026-02-17 05:14:13.132016 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:14:13.132028 | orchestrator | } 2026-02-17 05:14:13.132041 | orchestrator | 2026-02-17 05:14:13.132054 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-17 05:14:13.132067 | orchestrator | Tuesday 17 February 2026 05:14:12 +0000 (0:00:02.047) 0:01:11.447 ****** 2026-02-17 05:14:13.132093 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:14:13.132116 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:13.132130 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:13.132143 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:14:13.132156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:14:13.132178 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:14:19.712351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712420 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:14:19.712433 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:14:19.712445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:14:19.712457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712486 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:14:19.712602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:14:19.712618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-17 05:14:19.712630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712652 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:14:19.712663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:14:19.712675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:14:19.712698 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:14:19.712709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-17 05:14:19.712736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:15:48.490969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:15:48.491082 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:15:48.491100 | orchestrator | 2026-02-17 05:15:48.491112 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:15:48.491125 | orchestrator | Tuesday 17 February 2026 05:14:15 +0000 (0:00:03.134) 0:01:14.581 ****** 2026-02-17 05:15:48.491136 | orchestrator | 2026-02-17 05:15:48.491147 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:15:48.491182 | orchestrator | Tuesday 17 February 2026 05:14:16 +0000 (0:00:00.439) 0:01:15.021 ****** 2026-02-17 05:15:48.491194 | orchestrator | 2026-02-17 05:15:48.491205 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:15:48.491216 | orchestrator | Tuesday 17 February 2026 05:14:16 +0000 (0:00:00.439) 0:01:15.460 ****** 2026-02-17 05:15:48.491227 | orchestrator | 2026-02-17 05:15:48.491239 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:15:48.491257 | orchestrator | Tuesday 17 February 2026 05:14:17 +0000 (0:00:00.478) 0:01:15.939 ****** 2026-02-17 05:15:48.491279 | orchestrator | 2026-02-17 05:15:48.491307 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:15:48.491326 | orchestrator | Tuesday 17 February 2026 05:14:17 +0000 (0:00:00.480) 0:01:16.420 ****** 2026-02-17 05:15:48.491345 | orchestrator | 2026-02-17 05:15:48.491364 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-17 05:15:48.491382 | orchestrator | Tuesday 17 February 2026 05:14:18 +0000 (0:00:00.744) 0:01:17.164 ****** 2026-02-17 05:15:48.491400 | orchestrator | 2026-02-17 05:15:48.491418 | orchestrator | 
TASK [common : Flush handlers] ************************************************* 2026-02-17 05:15:48.491436 | orchestrator | Tuesday 17 February 2026 05:14:18 +0000 (0:00:00.421) 0:01:17.586 ****** 2026-02-17 05:15:48.491455 | orchestrator | 2026-02-17 05:15:48.491474 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-17 05:15:48.491496 | orchestrator | Tuesday 17 February 2026 05:14:19 +0000 (0:00:00.834) 0:01:18.420 ****** 2026-02-17 05:15:48.491517 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:15:48.491538 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:15:48.491554 | orchestrator | changed: [testbed-node-4] 2026-02-17 05:15:48.491607 | orchestrator | changed: [testbed-manager] 2026-02-17 05:15:48.491622 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:15:48.491635 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:15:48.491647 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:15:48.491660 | orchestrator | 2026-02-17 05:15:48.491672 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-17 05:15:48.491685 | orchestrator | Tuesday 17 February 2026 05:14:55 +0000 (0:00:35.593) 0:01:54.014 ****** 2026-02-17 05:15:48.491699 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:15:48.491712 | orchestrator | changed: [testbed-manager] 2026-02-17 05:15:48.491724 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:15:48.491737 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:15:48.491749 | orchestrator | changed: [testbed-node-4] 2026-02-17 05:15:48.491761 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:15:48.491773 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:15:48.491785 | orchestrator | 2026-02-17 05:15:48.491797 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-17 05:15:48.491809 | orchestrator | Tuesday 17 February 2026 05:15:32 
+0000 (0:00:37.570) 0:02:31.584 ****** 2026-02-17 05:15:48.491823 | orchestrator | ok: [testbed-manager] 2026-02-17 05:15:48.491836 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:15:48.491848 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:15:48.491859 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:15:48.491869 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:15:48.491880 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:15:48.491891 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:15:48.491901 | orchestrator | 2026-02-17 05:15:48.491912 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-17 05:15:48.491923 | orchestrator | Tuesday 17 February 2026 05:15:35 +0000 (0:00:03.115) 0:02:34.700 ****** 2026-02-17 05:15:48.491934 | orchestrator | changed: [testbed-manager] 2026-02-17 05:15:48.491945 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:15:48.491956 | orchestrator | changed: [testbed-node-4] 2026-02-17 05:15:48.491966 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:15:48.491978 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:15:48.492004 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:15:48.492021 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:15:48.492038 | orchestrator | 2026-02-17 05:15:48.492066 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:15:48.492087 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:15:48.492125 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:15:48.492146 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:15:48.492168 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 
05:15:48.492209 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:15:48.492221 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:15:48.492232 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:15:48.492243 | orchestrator | 2026-02-17 05:15:48.492254 | orchestrator | 2026-02-17 05:15:48.492265 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:15:48.492276 | orchestrator | Tuesday 17 February 2026 05:15:47 +0000 (0:00:11.937) 0:02:46.638 ****** 2026-02-17 05:15:48.492287 | orchestrator | =============================================================================== 2026-02-17 05:15:48.492298 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.57s 2026-02-17 05:15:48.492311 | orchestrator | common : Restart fluentd container ------------------------------------- 35.59s 2026-02-17 05:15:48.492329 | orchestrator | common : Restart cron container ---------------------------------------- 11.94s 2026-02-17 05:15:48.492357 | orchestrator | common : Copying over config.json files for services -------------------- 5.90s 2026-02-17 05:15:48.492376 | orchestrator | service-check-containers : common | Check containers -------------------- 4.52s 2026-02-17 05:15:48.492394 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.49s 2026-02-17 05:15:48.492410 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.22s 2026-02-17 05:15:48.492425 | orchestrator | common : Flush handlers ------------------------------------------------- 3.84s 2026-02-17 05:15:48.492444 | orchestrator | common : include_tasks -------------------------------------------------- 3.58s 2026-02-17 05:15:48.492461 | orchestrator 
| common : Copying over cron logrotate config file ------------------------ 3.54s 2026-02-17 05:15:48.492481 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.33s 2026-02-17 05:15:48.492499 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.24s 2026-02-17 05:15:48.492517 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.14s 2026-02-17 05:15:48.492531 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.12s 2026-02-17 05:15:48.492542 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.06s 2026-02-17 05:15:48.492552 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.99s 2026-02-17 05:15:48.492563 | orchestrator | common : include_tasks -------------------------------------------------- 2.98s 2026-02-17 05:15:48.492607 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.96s 2026-02-17 05:15:48.492618 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.94s 2026-02-17 05:15:48.492640 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.92s 2026-02-17 05:15:48.811457 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-17 05:15:51.026312 | orchestrator | 2026-02-17 05:15:51 | INFO  | Task 1fc59511-7a23-429f-900d-224a583fa29c (loadbalancer) was prepared for execution. 2026-02-17 05:15:51.026415 | orchestrator | 2026-02-17 05:15:51 | INFO  | It takes a moment until task 1fc59511-7a23-429f-900d-224a583fa29c (loadbalancer) has been started and output is visible here. 
2026-02-17 05:16:26.554423 | orchestrator | 2026-02-17 05:16:26.554541 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 05:16:26.554558 | orchestrator | 2026-02-17 05:16:26.554571 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 05:16:26.554583 | orchestrator | Tuesday 17 February 2026 05:15:57 +0000 (0:00:01.956) 0:00:01.956 ****** 2026-02-17 05:16:26.554670 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:16:26.554683 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:16:26.554695 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:16:26.554706 | orchestrator | 2026-02-17 05:16:26.554717 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 05:16:26.554729 | orchestrator | Tuesday 17 February 2026 05:15:59 +0000 (0:00:01.787) 0:00:03.743 ****** 2026-02-17 05:16:26.554740 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-17 05:16:26.554752 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-17 05:16:26.554763 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-17 05:16:26.554774 | orchestrator | 2026-02-17 05:16:26.554785 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-17 05:16:26.554796 | orchestrator | 2026-02-17 05:16:26.554807 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-17 05:16:26.554818 | orchestrator | Tuesday 17 February 2026 05:16:02 +0000 (0:00:03.097) 0:00:06.841 ****** 2026-02-17 05:16:26.554830 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:16:26.554841 | orchestrator | 2026-02-17 05:16:26.554871 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-02-17 05:16:26.554883 | orchestrator | Tuesday 17 February 2026 05:16:04 +0000 (0:00:02.260) 0:00:09.102 ****** 2026-02-17 05:16:26.554893 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:16:26.554904 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:16:26.554915 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:16:26.554927 | orchestrator | 2026-02-17 05:16:26.554938 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-17 05:16:26.554949 | orchestrator | Tuesday 17 February 2026 05:16:06 +0000 (0:00:01.987) 0:00:11.089 ****** 2026-02-17 05:16:26.554961 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:16:26.554973 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:16:26.554986 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:16:26.554998 | orchestrator | 2026-02-17 05:16:26.555011 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-17 05:16:26.555023 | orchestrator | Tuesday 17 February 2026 05:16:08 +0000 (0:00:02.052) 0:00:13.143 ****** 2026-02-17 05:16:26.555035 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:16:26.555048 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:16:26.555060 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:16:26.555072 | orchestrator | 2026-02-17 05:16:26.555085 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-17 05:16:26.555097 | orchestrator | Tuesday 17 February 2026 05:16:10 +0000 (0:00:01.707) 0:00:14.850 ****** 2026-02-17 05:16:26.555110 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:16:26.555123 | orchestrator | 2026-02-17 05:16:26.555135 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-17 05:16:26.555147 | orchestrator | Tuesday 17 February 2026 05:16:12 +0000 (0:00:01.874) 0:00:16.725 ****** 2026-02-17 
05:16:26.555183 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:16:26.555196 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:16:26.555208 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:16:26.555221 | orchestrator | 2026-02-17 05:16:26.555233 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-17 05:16:26.555245 | orchestrator | Tuesday 17 February 2026 05:16:14 +0000 (0:00:01.887) 0:00:18.612 ****** 2026-02-17 05:16:26.555258 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-17 05:16:26.555271 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-17 05:16:26.555283 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-17 05:16:26.555296 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-17 05:16:26.555308 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-17 05:16:26.555322 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-17 05:16:26.555334 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-17 05:16:26.555348 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-17 05:16:26.555359 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-17 05:16:26.555370 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-17 05:16:26.555381 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-17 05:16:26.555392 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
2026-02-17 05:16:26.555403 | orchestrator | 2026-02-17 05:16:26.555414 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-17 05:16:26.555425 | orchestrator | Tuesday 17 February 2026 05:16:17 +0000 (0:00:03.160) 0:00:21.773 ****** 2026-02-17 05:16:26.555436 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-17 05:16:26.555447 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-17 05:16:26.555458 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-17 05:16:26.555469 | orchestrator | 2026-02-17 05:16:26.555480 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-17 05:16:26.555508 | orchestrator | Tuesday 17 February 2026 05:16:19 +0000 (0:00:02.123) 0:00:23.896 ****** 2026-02-17 05:16:26.555520 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-17 05:16:26.555531 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-17 05:16:26.555542 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-17 05:16:26.555553 | orchestrator | 2026-02-17 05:16:26.555564 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-17 05:16:26.555575 | orchestrator | Tuesday 17 February 2026 05:16:21 +0000 (0:00:02.221) 0:00:26.118 ****** 2026-02-17 05:16:26.555586 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-17 05:16:26.555615 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:16:26.555627 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-17 05:16:26.555638 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:16:26.555648 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-17 05:16:26.555660 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:16:26.555671 | orchestrator | 2026-02-17 05:16:26.555682 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-02-17 05:16:26.555693 | orchestrator | Tuesday 17 February 2026 05:16:23 +0000 (0:00:01.932) 0:00:28.051 ****** 2026-02-17 05:16:26.555712 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:26.555738 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:26.555750 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:26.555762 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:26.555773 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:26.555793 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:37.646798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:16:37.646956 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:16:37.646974 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:16:37.646987 | orchestrator | 2026-02-17 05:16:37.647000 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-17 05:16:37.647013 | orchestrator | Tuesday 17 February 2026 05:16:26 +0000 (0:00:02.836) 0:00:30.888 ****** 2026-02-17 05:16:37.647025 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:16:37.647037 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:16:37.647048 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:16:37.647059 | orchestrator | 2026-02-17 05:16:37.647071 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-17 05:16:37.647083 | orchestrator | Tuesday 17 February 2026 05:16:28 +0000 (0:00:01.969) 0:00:32.858 ****** 2026-02-17 05:16:37.647094 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-17 05:16:37.647106 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-17 05:16:37.647117 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-17 05:16:37.647128 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-17 05:16:37.647139 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-17 05:16:37.647150 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-17 05:16:37.647161 | orchestrator | 2026-02-17 05:16:37.647172 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-17 05:16:37.647183 | orchestrator | Tuesday 17 February 2026 05:16:31 +0000 (0:00:02.790) 0:00:35.649 ****** 2026-02-17 05:16:37.647194 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:16:37.647205 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:16:37.647216 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:16:37.647229 | orchestrator | 2026-02-17 05:16:37.647243 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-17 05:16:37.647255 | orchestrator | Tuesday 17 February 2026 05:16:33 +0000 (0:00:02.313) 0:00:37.962 ****** 2026-02-17 05:16:37.647268 | orchestrator | ok: 
[testbed-node-0] 2026-02-17 05:16:37.647281 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:16:37.647293 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:16:37.647305 | orchestrator | 2026-02-17 05:16:37.647318 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-17 05:16:37.647330 | orchestrator | Tuesday 17 February 2026 05:16:35 +0000 (0:00:02.354) 0:00:40.317 ****** 2026-02-17 05:16:37.647343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 05:16:37.647386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:16:37.647407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:16:37.647422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 05:16:37.647436 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:16:37.647450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 05:16:37.647463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:16:37.647477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:16:37.647496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 05:16:37.647510 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
05:16:37.647535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 05:16:41.659438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:16:41.659544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:16:41.659561 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 05:16:41.659575 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:16:41.659588 | orchestrator | 2026-02-17 05:16:41.659641 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-17 05:16:41.659656 | orchestrator | Tuesday 17 February 2026 05:16:37 +0000 (0:00:01.662) 0:00:41.979 ****** 2026-02-17 05:16:41.659668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:41.659706 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:41.659719 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:41.659749 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:41.659761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:16:41.659790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 05:16:41.659802 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:41.659821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:16:41.659833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 05:16:41.659859 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:55.365688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:16:55.365812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d', '__omit_place_holder__ddf689484d322b4a0638de135cc95ea865c29d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-17 05:16:55.365829 | orchestrator | 2026-02-17 05:16:55.365843 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-17 05:16:55.365856 | orchestrator | Tuesday 17 February 2026 05:16:41 +0000 (0:00:04.017) 0:00:45.997 ****** 2026-02-17 05:16:55.365868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:55.365905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:55.365918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 05:16:55.365944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:55.365975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:55.365988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:16:55.365999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:16:55.366077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:16:55.366091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:16:55.366103 | orchestrator | 2026-02-17 05:16:55.366114 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-17 05:16:55.366125 | orchestrator | Tuesday 17 February 2026 05:16:46 +0000 (0:00:04.700) 0:00:50.697 ****** 2026-02-17 05:16:55.366137 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-17 05:16:55.366149 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-17 05:16:55.366161 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-17 
05:16:55.366172 | orchestrator | 2026-02-17 05:16:55.366183 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-17 05:16:55.366194 | orchestrator | Tuesday 17 February 2026 05:16:49 +0000 (0:00:02.851) 0:00:53.548 ****** 2026-02-17 05:16:55.366205 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-17 05:16:55.366222 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-17 05:16:55.366233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-17 05:16:55.366245 | orchestrator | 2026-02-17 05:16:55.366256 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-17 05:16:55.366266 | orchestrator | Tuesday 17 February 2026 05:16:53 +0000 (0:00:04.266) 0:00:57.815 ****** 2026-02-17 05:16:55.366278 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:16:55.366290 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:16:55.366309 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:15.871730 | orchestrator | 2026-02-17 05:17:15.871847 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-17 05:17:15.871865 | orchestrator | Tuesday 17 February 2026 05:16:55 +0000 (0:00:01.885) 0:00:59.701 ****** 2026-02-17 05:17:15.871877 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-17 05:17:15.871890 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-17 05:17:15.871901 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-17 05:17:15.871937 | 
orchestrator | 2026-02-17 05:17:15.871950 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-17 05:17:15.871961 | orchestrator | Tuesday 17 February 2026 05:16:58 +0000 (0:00:02.983) 0:01:02.685 ****** 2026-02-17 05:17:15.871972 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-17 05:17:15.871985 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-17 05:17:15.871996 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-17 05:17:15.872007 | orchestrator | 2026-02-17 05:17:15.872018 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-17 05:17:15.872029 | orchestrator | Tuesday 17 February 2026 05:17:01 +0000 (0:00:02.776) 0:01:05.461 ****** 2026-02-17 05:17:15.872040 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:17:15.872051 | orchestrator | 2026-02-17 05:17:15.872062 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-17 05:17:15.872073 | orchestrator | Tuesday 17 February 2026 05:17:03 +0000 (0:00:01.938) 0:01:07.399 ****** 2026-02-17 05:17:15.872085 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-17 05:17:15.872096 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-17 05:17:15.872107 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-17 05:17:15.872118 | orchestrator | 2026-02-17 05:17:15.872129 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-17 05:17:15.872140 | orchestrator | Tuesday 17 February 2026 05:17:05 +0000 (0:00:02.764) 0:01:10.163 ****** 2026-02-17 05:17:15.872151 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-17 05:17:15.872162 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-17 05:17:15.872173 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-17 05:17:15.872184 | orchestrator | 2026-02-17 05:17:15.872211 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-17 05:17:15.872235 | orchestrator | Tuesday 17 February 2026 05:17:08 +0000 (0:00:02.651) 0:01:12.815 ****** 2026-02-17 05:17:15.872248 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:15.872262 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:15.872276 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:15.872288 | orchestrator | 2026-02-17 05:17:15.872301 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-17 05:17:15.872313 | orchestrator | Tuesday 17 February 2026 05:17:09 +0000 (0:00:01.337) 0:01:14.152 ****** 2026-02-17 05:17:15.872326 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:15.872338 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:15.872350 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:15.872363 | orchestrator | 2026-02-17 05:17:15.872375 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-17 05:17:15.872387 | orchestrator | Tuesday 17 February 2026 05:17:11 +0000 (0:00:02.011) 0:01:16.164 ****** 2026-02-17 05:17:15.872417 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 05:17:15.872449 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 05:17:15.872500 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 05:17:15.872513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:17:15.872525 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:17:15.872536 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:17:15.872548 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:17:15.872563 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:17:15.872608 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:17:19.621784 | orchestrator | 2026-02-17 05:17:19.621889 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-17 05:17:19.621907 | orchestrator | Tuesday 17 February 2026 05:17:15 +0000 (0:00:04.041) 0:01:20.206 ****** 2026-02-17 05:17:19.621923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 05:17:19.621939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:19.621952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:19.621964 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:19.621977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 05:17:19.621989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:19.622091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:19.622106 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:19.622137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 05:17:19.622150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:19.622162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:19.622173 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:19.622184 | orchestrator | 2026-02-17 05:17:19.622196 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-02-17 05:17:19.622207 | orchestrator | Tuesday 17 February 2026 05:17:17 +0000 (0:00:01.596) 0:01:21.802 ****** 2026-02-17 05:17:19.622219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 05:17:19.622231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:19.622255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:19.622268 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:19.622290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 05:17:31.034252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:31.034371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:31.034388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 05:17:31.034402 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:31.034417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:31.034454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:31.034466 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:31.034478 | orchestrator | 2026-02-17 05:17:31.034491 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-17 05:17:31.034519 | orchestrator | Tuesday 17 February 2026 05:17:19 +0000 (0:00:02.153) 0:01:23.955 ****** 2026-02-17 05:17:31.034532 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-17 05:17:31.034545 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-17 05:17:31.034556 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-17 05:17:31.034568 | orchestrator | 2026-02-17 05:17:31.034579 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-17 05:17:31.034590 | orchestrator | Tuesday 17 February 2026 05:17:22 +0000 (0:00:02.447) 0:01:26.403 ****** 2026-02-17 05:17:31.034601 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-17 05:17:31.034612 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-17 05:17:31.034669 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-17 05:17:31.034690 | orchestrator | 2026-02-17 05:17:31.034729 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-17 05:17:31.034741 | orchestrator | Tuesday 17 February 2026 05:17:24 +0000 (0:00:02.489) 0:01:28.893 ****** 2026-02-17 05:17:31.034752 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 05:17:31.034763 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 05:17:31.034774 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 05:17:31.034785 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:31.034797 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-17 05:17:31.034808 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 05:17:31.034819 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:31.034830 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-17 05:17:31.034840 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:31.034852 | orchestrator | 2026-02-17 05:17:31.034863 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-17 05:17:31.034874 | orchestrator | Tuesday 17 February 2026 05:17:27 +0000 (0:00:02.542) 0:01:31.435 ****** 2026-02-17 05:17:31.034886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-17 05:17:31.034908 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-17 05:17:31.034920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-17 05:17:31.034938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-02-17 05:17:31.034959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:17:34.800408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-17 05:17:34.800525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:17:34.800565 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:17:34.800578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-17 05:17:34.800590 | orchestrator | 2026-02-17 05:17:34.800604 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-17 05:17:34.800709 | orchestrator | Tuesday 17 February 2026 05:17:31 +0000 (0:00:03.934) 0:01:35.370 ****** 2026-02-17 05:17:34.800735 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:17:34.800754 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:17:34.800772 | orchestrator | } 2026-02-17 05:17:34.800789 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:17:34.800806 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:17:34.800824 | orchestrator | } 2026-02-17 05:17:34.800842 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 05:17:34.800861 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:17:34.800880 | orchestrator | } 2026-02-17 
05:17:34.800892 | orchestrator | 2026-02-17 05:17:34.800904 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-17 05:17:34.800917 | orchestrator | Tuesday 17 February 2026 05:17:32 +0000 (0:00:01.395) 0:01:36.766 ****** 2026-02-17 05:17:34.800931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-17 05:17:34.800966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:34.800998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:34.801021 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:34.801035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-17 05:17:34.801049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:34.801062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:34.801076 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:34.801094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-17 05:17:34.801108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-17 05:17:34.801130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-17 05:17:40.503752 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:40.503854 | orchestrator | 2026-02-17 05:17:40.503869 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-17 05:17:40.503881 | orchestrator | Tuesday 17 February 2026 05:17:34 +0000 (0:00:02.362) 0:01:39.128 ****** 2026-02-17 05:17:40.503892 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:17:40.503902 | orchestrator | 2026-02-17 05:17:40.503912 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-17 05:17:40.503922 | orchestrator | Tuesday 17 February 2026 05:17:36 +0000 (0:00:02.093) 0:01:41.221 ****** 2026-02-17 05:17:40.503936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:17:40.503952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 05:17:40.503964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:40.503991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 05:17:40.504020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:17:40.504054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 05:17:40.504066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:40.504076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:17:40.504092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 05:17:40.504102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 05:17:40.504128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:42.226960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 05:17:42.227066 | orchestrator | 2026-02-17 05:17:42.227081 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-17 05:17:42.227091 | orchestrator | Tuesday 17 February 2026 05:17:41 +0000 (0:00:04.716) 0:01:45.938 ****** 2026-02-17 05:17:42.227103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:17:42.227117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-02-17 05:17:42.227142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:42.227170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 05:17:42.227180 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:42.227208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:17:42.227219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 05:17:42.227228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:42.227237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 05:17:42.227246 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:42.227260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:17:42.227276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-17 05:17:42.227290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:57.507535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-17 05:17:57.507963 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:57.508012 | orchestrator | 2026-02-17 05:17:57.508026 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-17 05:17:57.508039 | orchestrator | Tuesday 17 February 2026 05:17:43 +0000 (0:00:01.727) 0:01:47.665 ****** 2026-02-17 05:17:57.508052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-02-17 05:17:57.508067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:17:57.508083 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:57.508096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:17:57.508109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:17:57.508151 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:57.508179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:17:57.508192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:17:57.508205 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:17:57.508218 | orchestrator | 2026-02-17 05:17:57.508231 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-17 05:17:57.508242 | orchestrator | Tuesday 17 February 2026 05:17:45 +0000 (0:00:02.203) 0:01:49.869 ****** 2026-02-17 05:17:57.508253 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:17:57.508264 | 
orchestrator | ok: [testbed-node-1] 2026-02-17 05:17:57.508275 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:17:57.508286 | orchestrator | 2026-02-17 05:17:57.508297 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-17 05:17:57.508309 | orchestrator | Tuesday 17 February 2026 05:17:47 +0000 (0:00:02.290) 0:01:52.160 ****** 2026-02-17 05:17:57.508320 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:17:57.508331 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:17:57.508341 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:17:57.508352 | orchestrator | 2026-02-17 05:17:57.508363 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-17 05:17:57.508374 | orchestrator | Tuesday 17 February 2026 05:17:50 +0000 (0:00:02.983) 0:01:55.143 ****** 2026-02-17 05:17:57.508385 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:17:57.508396 | orchestrator | 2026-02-17 05:17:57.508407 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-17 05:17:57.508424 | orchestrator | Tuesday 17 February 2026 05:17:52 +0000 (0:00:01.684) 0:01:56.828 ****** 2026-02-17 05:17:57.508465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:17:57.508482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:57.508495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:17:57.508522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:17:57.508535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:57.508547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:17:57.508569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:17:59.283157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:59.283300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:17:59.283319 | orchestrator | 2026-02-17 05:17:59.283333 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-17 05:17:59.283346 | orchestrator | Tuesday 17 February 2026 05:17:57 +0000 (0:00:05.010) 0:02:01.838 ****** 2026-02-17 05:17:59.283360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:17:59.283375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:59.283386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:17:59.283398 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:17:59.283439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:17:59.283488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-17 05:17:59.283502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:17:59.283513 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:17:59.283525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:17:59.283537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-02-17 05:17:59.283563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:18:16.166745 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:16.166859 | orchestrator | 2026-02-17 05:18:16.166876 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-17 05:18:16.166889 | orchestrator | Tuesday 17 February 2026 05:17:59 +0000 (0:00:01.775) 0:02:03.614 ****** 2026-02-17 05:18:16.166901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:16.166933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:16.166946 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:16.166958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 
05:18:16.166970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:16.166982 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:16.166993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:16.167004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:16.167015 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:16.167026 | orchestrator | 2026-02-17 05:18:16.167038 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-17 05:18:16.167049 | orchestrator | Tuesday 17 February 2026 05:18:01 +0000 (0:00:01.909) 0:02:05.523 ****** 2026-02-17 05:18:16.167060 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:18:16.167073 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:18:16.167085 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:18:16.167099 | orchestrator | 2026-02-17 05:18:16.167110 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-17 05:18:16.167122 | orchestrator | Tuesday 17 February 2026 05:18:03 +0000 (0:00:02.237) 0:02:07.761 ****** 2026-02-17 05:18:16.167132 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:18:16.167143 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:18:16.167154 | orchestrator | ok: 
[testbed-node-2] 2026-02-17 05:18:16.167165 | orchestrator | 2026-02-17 05:18:16.167198 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-17 05:18:16.167210 | orchestrator | Tuesday 17 February 2026 05:18:06 +0000 (0:00:03.094) 0:02:10.855 ****** 2026-02-17 05:18:16.167222 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:16.167235 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:16.167248 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:16.167260 | orchestrator | 2026-02-17 05:18:16.167272 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-17 05:18:16.167285 | orchestrator | Tuesday 17 February 2026 05:18:07 +0000 (0:00:01.441) 0:02:12.297 ****** 2026-02-17 05:18:16.167297 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:18:16.167310 | orchestrator | 2026-02-17 05:18:16.167322 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-17 05:18:16.167335 | orchestrator | Tuesday 17 February 2026 05:18:09 +0000 (0:00:01.719) 0:02:14.017 ****** 2026-02-17 05:18:16.167349 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-17 05:18:16.167385 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-17 05:18:16.167399 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-17 05:18:16.167412 | orchestrator | 2026-02-17 05:18:16.167425 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-17 
05:18:16.167438 | orchestrator | Tuesday 17 February 2026 05:18:13 +0000 (0:00:03.789) 0:02:17.806 ****** 2026-02-17 05:18:16.167451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-17 05:18:16.167473 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:16.167494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-17 05:18:16.167507 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
05:18:16.167527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-17 05:18:28.566373 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:28.566489 | orchestrator | 2026-02-17 05:18:28.566514 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-17 05:18:28.566533 | orchestrator | Tuesday 17 February 2026 05:18:16 +0000 (0:00:02.697) 0:02:20.504 ****** 2026-02-17 05:18:28.566574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 05:18:28.566652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 05:18:28.566674 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:28.566690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 05:18:28.566733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 05:18:28.566751 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:28.566768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 05:18:28.566786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-17 05:18:28.566802 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:28.566820 | orchestrator | 2026-02-17 05:18:28.566839 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-17 05:18:28.566856 | orchestrator | Tuesday 17 February 2026 05:18:19 +0000 (0:00:02.915) 0:02:23.420 ****** 2026-02-17 05:18:28.566873 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:28.566892 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:28.566911 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:28.566929 | orchestrator | 2026-02-17 05:18:28.566944 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-17 05:18:28.566955 | orchestrator | Tuesday 17 February 2026 05:18:20 +0000 (0:00:01.449) 0:02:24.870 ****** 2026-02-17 05:18:28.566966 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:28.566977 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:28.566987 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:28.566998 | orchestrator | 2026-02-17 05:18:28.567009 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-17 05:18:28.567020 | orchestrator | Tuesday 17 February 2026 05:18:22 +0000 (0:00:02.376) 0:02:27.246 ****** 2026-02-17 05:18:28.567031 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:18:28.567042 | orchestrator | 2026-02-17 05:18:28.567052 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-17 05:18:28.567063 | orchestrator | Tuesday 17 February 2026 05:18:24 +0000 (0:00:01.900) 0:02:29.147 ****** 2026-02-17 05:18:28.567107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:18:28.567136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:18:28.567148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 05:18:28.567161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 05:18:28.567173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:18:28.567200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:18:30.669703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.669809 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.669826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.669840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.669852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.669896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.669931 | orchestrator | 2026-02-17 05:18:30.669947 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-17 05:18:30.669967 | orchestrator | Tuesday 17 February 2026 05:18:29 +0000 (0:00:04.909) 0:02:34.057 ****** 
2026-02-17 05:18:30.669989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:18:30.670011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.670106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.670127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 05:18:30.670162 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:30.670208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:18:42.227445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:18:42.227556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 05:18:42.227573 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 05:18:42.227587 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:42.227650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:18:42.227711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:18:42.227744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-17 05:18:42.227756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-02-17 05:18:42.227768 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:42.227780 | orchestrator | 2026-02-17 05:18:42.227791 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-17 05:18:42.227804 | orchestrator | Tuesday 17 February 2026 05:18:31 +0000 (0:00:02.036) 0:02:36.093 ****** 2026-02-17 05:18:42.227815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:42.227829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:42.227841 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:42.227853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:42.227864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:42.227883 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:42.227895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-17 05:18:42.227906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:18:42.227917 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:42.227928 | orchestrator | 2026-02-17 05:18:42.227939 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-17 05:18:42.227950 | orchestrator | Tuesday 17 February 2026 05:18:33 +0000 (0:00:02.035) 0:02:38.129 ****** 2026-02-17 05:18:42.227961 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:18:42.227973 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:18:42.227985 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:18:42.227997 | orchestrator | 2026-02-17 05:18:42.228022 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-17 05:18:42.228043 | orchestrator | Tuesday 17 February 2026 05:18:36 +0000 (0:00:02.294) 0:02:40.423 ****** 2026-02-17 05:18:42.228060 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:18:42.228079 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:18:42.228096 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:18:42.228115 | orchestrator | 2026-02-17 05:18:42.228132 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-17 05:18:42.228150 | orchestrator | Tuesday 17 February 2026 05:18:38 +0000 (0:00:02.840) 0:02:43.264 ****** 2026-02-17 05:18:42.228168 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:42.228187 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:42.228205 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:42.228224 | orchestrator | 2026-02-17 05:18:42.228241 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-02-17 05:18:42.228260 | orchestrator | Tuesday 17 February 2026 05:18:40 +0000 (0:00:01.622) 0:02:44.887 ****** 2026-02-17 05:18:42.228278 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:42.228299 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:42.228333 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:18:47.810007 | orchestrator | 2026-02-17 05:18:47.810178 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-17 05:18:47.810196 | orchestrator | Tuesday 17 February 2026 05:18:42 +0000 (0:00:01.673) 0:02:46.561 ****** 2026-02-17 05:18:47.810208 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:18:47.810219 | orchestrator | 2026-02-17 05:18:47.810231 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-17 05:18:47.810243 | orchestrator | Tuesday 17 February 2026 05:18:44 +0000 (0:00:02.050) 0:02:48.612 ****** 2026-02-17 05:18:47.810261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:18:47.810303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 05:18:47.810317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 05:18:47.810343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 05:18:47.810356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 05:18:47.810387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:18:47.810399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-17 05:18:47.810411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:18:47.810435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:18:47.810452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 05:18:47.810473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.659932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 05:18:49.660086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-17 05:18:49.660288 | orchestrator | 2026-02-17 05:18:49.660310 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-17 05:18:49.660330 | orchestrator | Tuesday 17 February 2026 05:18:49 +0000 (0:00:04.769) 0:02:53.381 ****** 2026-02-17 05:18:49.660359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:18:49.660395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 05:18:50.921330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.921455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.921472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.921485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.921513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:18:50.921546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.921567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 05:18:50.921580 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:18:50.921645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.921660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.921671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.921683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.922472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-17 05:18:50.922513 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:18:50.922548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:19:05.983192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-17 05:19:05.983311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-17 05:19:05.983328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-17 05:19:05.983341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-17 05:19:05.983353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:19:05.983403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-17 05:19:05.983418 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:05.983432 | orchestrator | 2026-02-17 05:19:05.983444 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-17 05:19:05.983456 | orchestrator | Tuesday 17 February 2026 05:18:50 +0000 (0:00:01.878) 0:02:55.260 ****** 2026-02-17 05:19:05.983484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:05.983499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:05.983513 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:05.983524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:05.983536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:05.983547 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:05.983558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:05.983569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:05.983580 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:05.983591 | orchestrator | 2026-02-17 05:19:05.983662 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-17 05:19:05.983675 | orchestrator | Tuesday 17 February 2026 05:18:52 +0000 (0:00:02.005) 0:02:57.265 ****** 2026-02-17 05:19:05.983687 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:19:05.983699 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:19:05.983710 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:19:05.983724 | orchestrator | 2026-02-17 05:19:05.983737 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-17 05:19:05.983751 | orchestrator | Tuesday 17 February 2026 05:18:55 +0000 (0:00:02.313) 0:02:59.579 ****** 2026-02-17 05:19:05.983763 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:19:05.983776 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:19:05.983788 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:19:05.983801 | orchestrator | 2026-02-17 05:19:05.983814 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-17 05:19:05.983836 | orchestrator | Tuesday 17 February 2026 05:18:58 +0000 (0:00:02.939) 0:03:02.519 ****** 2026-02-17 05:19:05.983848 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:05.983861 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:05.983873 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:05.983886 | orchestrator | 2026-02-17 05:19:05.983898 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-02-17 05:19:05.983910 | orchestrator | Tuesday 17 February 2026 05:18:59 +0000 (0:00:01.320) 0:03:03.839 ****** 2026-02-17 05:19:05.983923 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:19:05.983936 | orchestrator | 2026-02-17 05:19:05.983949 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-17 05:19:05.983962 | orchestrator | Tuesday 17 February 2026 05:19:01 +0000 (0:00:01.872) 0:03:05.712 ****** 2026-02-17 05:19:05.983994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 05:19:07.108034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 05:19:07.108212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 05:19:07.108261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 05:19:07.108289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-17 
05:19:07.108312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 
05:19:10.862002 | orchestrator | 2026-02-17 05:19:10.862171 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-17 05:19:10.862188 | orchestrator | Tuesday 17 February 2026 05:19:07 +0000 (0:00:05.737) 0:03:11.450 ****** 2026-02-17 05:19:10.862219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 05:19:10.862235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 05:19:10.862266 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:10.862298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 05:19:10.862316 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 05:19:10.862350 | orchestrator | 
skipping: [testbed-node-1] 2026-02-17 05:19:10.862371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-17 05:19:29.939981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-17 05:19:29.940101 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:29.940118 | orchestrator | 2026-02-17 05:19:29.940131 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-02-17 05:19:29.940143 | orchestrator | Tuesday 17 February 2026 05:19:11 +0000 (0:00:04.866) 0:03:16.316 ****** 2026-02-17 05:19:29.940156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 05:19:29.940192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 05:19:29.940204 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:29.940216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 05:19:29.940245 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 05:19:29.940264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 05:19:29.940276 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:29.940287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-17 05:19:29.940299 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:29.940310 | orchestrator | 2026-02-17 05:19:29.940322 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-02-17 05:19:29.940333 | orchestrator | Tuesday 17 February 2026 05:19:16 +0000 (0:00:04.929) 0:03:21.245 ****** 2026-02-17 05:19:29.940344 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:19:29.940356 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:19:29.940366 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:19:29.940377 | orchestrator | 2026-02-17 05:19:29.940387 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-17 05:19:29.940398 | orchestrator | Tuesday 17 February 2026 05:19:19 +0000 (0:00:02.345) 0:03:23.591 ****** 2026-02-17 05:19:29.940417 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:19:29.940428 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:19:29.940439 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:19:29.940449 | orchestrator | 2026-02-17 05:19:29.940460 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-17 05:19:29.940471 | orchestrator | Tuesday 17 February 2026 05:19:22 +0000 (0:00:02.992) 0:03:26.583 ****** 2026-02-17 05:19:29.940482 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:29.940494 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:29.940507 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:29.940519 | orchestrator | 2026-02-17 05:19:29.940533 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-17 05:19:29.940545 | orchestrator | Tuesday 17 February 2026 05:19:23 +0000 (0:00:01.409) 0:03:27.992 ****** 2026-02-17 05:19:29.940558 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:19:29.940570 | orchestrator | 2026-02-17 05:19:29.940581 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-17 05:19:29.940594 | orchestrator | Tuesday 17 February 2026 05:19:25 +0000 (0:00:01.772) 0:03:29.765 ****** 2026-02-17 
05:19:29.940639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:19:29.940675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:19:47.408577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:19:47.408694 | orchestrator | 2026-02-17 05:19:47.408702 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-17 05:19:47.408708 | orchestrator | Tuesday 17 February 2026 05:19:29 +0000 (0:00:04.513) 0:03:34.278 ****** 2026-02-17 05:19:47.408714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:19:47.408733 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:47.408739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:19:47.408743 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:47.408748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:19:47.408752 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:47.408756 | orchestrator | 2026-02-17 05:19:47.408760 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-17 05:19:47.408764 | orchestrator | Tuesday 17 February 2026 05:19:31 +0000 (0:00:01.757) 0:03:36.036 ****** 2026-02-17 05:19:47.408770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:47.408777 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:47.408782 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:47.408801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:47.408808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:47.408813 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:47.408817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:47.408825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:19:47.408829 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:47.408833 | orchestrator | 2026-02-17 05:19:47.408837 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-17 05:19:47.408841 | orchestrator | Tuesday 17 February 2026 05:19:33 +0000 (0:00:01.518) 0:03:37.554 ****** 2026-02-17 05:19:47.408845 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:19:47.408850 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:19:47.408854 | 
orchestrator | ok: [testbed-node-2] 2026-02-17 05:19:47.408858 | orchestrator | 2026-02-17 05:19:47.408861 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-17 05:19:47.408865 | orchestrator | Tuesday 17 February 2026 05:19:35 +0000 (0:00:02.457) 0:03:40.012 ****** 2026-02-17 05:19:47.408869 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:19:47.408873 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:19:47.408877 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:19:47.408881 | orchestrator | 2026-02-17 05:19:47.408885 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-17 05:19:47.408889 | orchestrator | Tuesday 17 February 2026 05:19:38 +0000 (0:00:03.208) 0:03:43.220 ****** 2026-02-17 05:19:47.408893 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:47.408897 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:47.408901 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:47.408905 | orchestrator | 2026-02-17 05:19:47.408909 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-17 05:19:47.408913 | orchestrator | Tuesday 17 February 2026 05:19:40 +0000 (0:00:01.405) 0:03:44.626 ****** 2026-02-17 05:19:47.408917 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:19:47.408921 | orchestrator | 2026-02-17 05:19:47.408925 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-17 05:19:47.408929 | orchestrator | Tuesday 17 February 2026 05:19:42 +0000 (0:00:01.734) 0:03:46.360 ****** 2026-02-17 05:19:47.408943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 
05:19:49.363322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 05:19:49.363467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-17 05:19:49.363508 | orchestrator | 2026-02-17 05:19:49.363523 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-17 05:19:49.363536 | orchestrator | Tuesday 17 February 2026 05:19:47 +0000 (0:00:05.386) 0:03:51.747 ****** 2026-02-17 05:19:49.363551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 05:19:49.363565 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:49.363595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 05:19:58.157791 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:58.157911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-17 05:19:58.157954 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:58.157968 | orchestrator | 2026-02-17 05:19:58.157980 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-17 05:19:58.157993 | orchestrator | Tuesday 17 February 2026 05:19:49 +0000 (0:00:01.957) 0:03:53.705 ****** 2026-02-17 05:19:58.158005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-17 05:19:58.158095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 05:19:58.158190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-17 05:19:58.158212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 05:19:58.158224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-17 05:19:58.158239 | orchestrator | skipping: [testbed-node-0] 2026-02-17 
05:19:58.158272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-17 05:19:58.158286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 05:19:58.158300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-17 05:19:58.158313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 05:19:58.158326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-17 05:19:58.158339 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:58.158353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-17 05:19:58.158377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 05:19:58.158390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-17 05:19:58.158401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-17 05:19:58.158412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-17 05:19:58.158429 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:58.158440 | orchestrator | 2026-02-17 05:19:58.158451 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-17 05:19:58.158463 | orchestrator | Tuesday 17 February 2026 05:19:51 +0000 (0:00:02.020) 0:03:55.726 ****** 2026-02-17 05:19:58.158474 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:19:58.158486 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:19:58.158497 | orchestrator 
| ok: [testbed-node-2] 2026-02-17 05:19:58.158508 | orchestrator | 2026-02-17 05:19:58.158519 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-17 05:19:58.158530 | orchestrator | Tuesday 17 February 2026 05:19:53 +0000 (0:00:02.249) 0:03:57.975 ****** 2026-02-17 05:19:58.158542 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:19:58.158553 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:19:58.158564 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:19:58.158575 | orchestrator | 2026-02-17 05:19:58.158586 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-17 05:19:58.158597 | orchestrator | Tuesday 17 February 2026 05:19:56 +0000 (0:00:02.903) 0:04:00.878 ****** 2026-02-17 05:19:58.158608 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:19:58.158619 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:19:58.158654 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:19:58.158666 | orchestrator | 2026-02-17 05:19:58.158677 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-17 05:19:58.158688 | orchestrator | Tuesday 17 February 2026 05:19:57 +0000 (0:00:01.388) 0:04:02.267 ****** 2026-02-17 05:19:58.158706 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:20:08.637954 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:20:08.638155 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:20:08.638187 | orchestrator | 2026-02-17 05:20:08.638202 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-17 05:20:08.638226 | orchestrator | Tuesday 17 February 2026 05:19:59 +0000 (0:00:01.467) 0:04:03.734 ****** 2026-02-17 05:20:08.638239 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:20:08.638250 | orchestrator | 2026-02-17 05:20:08.638262 | orchestrator | TASK 
[haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-17 05:20:08.638273 | orchestrator | Tuesday 17 February 2026 05:20:01 +0000 (0:00:02.223) 0:04:05.958 ****** 2026-02-17 05:20:08.638291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-17 05:20:08.638330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 
05:20:08.638344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 05:20:08.638371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-17 05:20:08.638402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 05:20:08.638415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-17 05:20:08.638439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 05:20:08.638451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 05:20:08.638468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 05:20:08.638480 | orchestrator | 2026-02-17 05:20:08.638494 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-17 05:20:08.638508 | orchestrator | Tuesday 17 February 2026 05:20:06 +0000 (0:00:04.991) 0:04:10.950 ****** 2026-02-17 05:20:08.638530 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-17 05:20:10.342953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-17 05:20:10.343061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 05:20:10.343079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 05:20:10.343109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 05:20:10.343123 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:20:10.343136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 05:20:10.343169 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:20:10.343202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-17 05:20:10.343224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-17 05:20:10.343237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-17 05:20:10.343248 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:20:10.343260 | orchestrator | 2026-02-17 05:20:10.343272 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-17 05:20:10.343285 | orchestrator | Tuesday 17 February 2026 05:20:08 +0000 (0:00:02.021) 0:04:12.971 ****** 2026-02-17 05:20:10.343303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-17 05:20:10.343318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-17 05:20:10.343331 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:20:10.343343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-17 05:20:10.343354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-17 05:20:10.343373 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:20:10.343385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-17 05:20:10.343396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-17 05:20:10.343408 | orchestrator | skipping: [testbed-node-2] 2026-02-17 
05:20:10.343419 | orchestrator | 2026-02-17 05:20:10.343430 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-17 05:20:10.343448 | orchestrator | Tuesday 17 February 2026 05:20:10 +0000 (0:00:01.693) 0:04:14.665 ****** 2026-02-17 05:20:26.319548 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:20:26.319719 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:20:26.319736 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:20:26.319749 | orchestrator | 2026-02-17 05:20:26.319762 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-17 05:20:26.319774 | orchestrator | Tuesday 17 February 2026 05:20:12 +0000 (0:00:02.209) 0:04:16.875 ****** 2026-02-17 05:20:26.319785 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:20:26.319796 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:20:26.319807 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:20:26.319819 | orchestrator | 2026-02-17 05:20:26.319830 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-17 05:20:26.319841 | orchestrator | Tuesday 17 February 2026 05:20:15 +0000 (0:00:03.461) 0:04:20.336 ****** 2026-02-17 05:20:26.319853 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:20:26.319865 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:20:26.319876 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:20:26.319887 | orchestrator | 2026-02-17 05:20:26.319898 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-17 05:20:26.319909 | orchestrator | Tuesday 17 February 2026 05:20:17 +0000 (0:00:01.514) 0:04:21.851 ****** 2026-02-17 05:20:26.319920 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:20:26.319931 | orchestrator | 2026-02-17 05:20:26.319942 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] 
********************* 2026-02-17 05:20:26.319953 | orchestrator | Tuesday 17 February 2026 05:20:19 +0000 (0:00:01.984) 0:04:23.836 ****** 2026-02-17 05:20:26.319970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:20:26.320004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2026-02-17 05:20:26.320038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:20:26.320069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:20:26.320148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:20:26.320164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:20:26.320186 | orchestrator | 2026-02-17 05:20:26.320205 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-17 05:20:26.320219 | orchestrator | Tuesday 17 
February 2026 05:20:24 +0000 (0:00:05.072) 0:04:28.908 ****** 2026-02-17 05:20:26.320233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:20:26.320256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:20:39.384610 | orchestrator | skipping: 
[testbed-node-0] 2026-02-17 05:20:39.384749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:20:39.384763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:20:39.384791 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:20:39.384812 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:20:39.384820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:20:39.384827 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:20:39.384835 | orchestrator | 2026-02-17 05:20:39.384843 | orchestrator | TASK 
[haproxy-config : Configuring firewall for magnum] ************************ 2026-02-17 05:20:39.384852 | orchestrator | Tuesday 17 February 2026 05:20:26 +0000 (0:00:01.748) 0:04:30.657 ****** 2026-02-17 05:20:39.384871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:20:39.384880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:20:39.384889 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:20:39.384896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:20:39.384903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:20:39.384910 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:20:39.384917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:20:39.384925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-17 
05:20:39.384936 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:20:39.384943 | orchestrator | 2026-02-17 05:20:39.384950 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-17 05:20:39.384967 | orchestrator | Tuesday 17 February 2026 05:20:28 +0000 (0:00:02.022) 0:04:32.680 ****** 2026-02-17 05:20:39.384974 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:20:39.384988 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:20:39.384995 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:20:39.385002 | orchestrator | 2026-02-17 05:20:39.385009 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-17 05:20:39.385016 | orchestrator | Tuesday 17 February 2026 05:20:30 +0000 (0:00:02.260) 0:04:34.940 ****** 2026-02-17 05:20:39.385022 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:20:39.385029 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:20:39.385036 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:20:39.385043 | orchestrator | 2026-02-17 05:20:39.385049 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-17 05:20:39.385056 | orchestrator | Tuesday 17 February 2026 05:20:33 +0000 (0:00:02.858) 0:04:37.798 ****** 2026-02-17 05:20:39.385063 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:20:39.385070 | orchestrator | 2026-02-17 05:20:39.385080 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-17 05:20:39.385087 | orchestrator | Tuesday 17 February 2026 05:20:35 +0000 (0:00:02.154) 0:04:39.953 ****** 2026-02-17 05:20:39.385095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:20:39.385104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:20:39.385117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 05:20:41.260352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 05:20:41.260481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:20:41.260530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:20:41.260554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 05:20:41.260567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 05:20:41.260598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 
'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:20:41.260618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:20:41.260635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 05:20:41.260647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 05:20:41.260659 | orchestrator | 2026-02-17 05:20:41.260719 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-17 05:20:41.260733 | orchestrator | Tuesday 17 February 2026 05:20:40 +0000 (0:00:04.947) 0:04:44.901 ****** 2026-02-17 05:20:41.260746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:20:41.260765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.588811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.588933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.588949 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:20:44.588979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:20:44.588990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.589002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.589050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.589062 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:20:44.589073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:20:44.589097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.589115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.589132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-17 05:20:44.589149 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:20:44.589160 | orchestrator | 2026-02-17 05:20:44.589171 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-17 05:20:44.589182 | orchestrator | Tuesday 17 February 2026 05:20:42 +0000 (0:00:01.809) 0:04:46.710 ****** 2026-02-17 05:20:44.589202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:20:44.589215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:20:44.589226 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:20:44.589237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:20:44.589254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:00.285401 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:00.285540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:00.285561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:00.285575 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:00.285587 | orchestrator | 2026-02-17 05:21:00.285599 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-17 05:21:00.285611 | orchestrator | Tuesday 17 February 2026 05:20:44 +0000 (0:00:02.208) 0:04:48.918 ****** 2026-02-17 05:21:00.285622 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:21:00.285634 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:21:00.285645 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:21:00.285656 | orchestrator | 2026-02-17 05:21:00.285668 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-17 05:21:00.285703 | orchestrator | Tuesday 17 February 2026 05:20:46 +0000 (0:00:02.337) 0:04:51.256 ****** 2026-02-17 05:21:00.285715 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:21:00.285726 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:21:00.285737 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:21:00.285748 | orchestrator | 2026-02-17 05:21:00.285759 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-17 05:21:00.285769 | orchestrator | Tuesday 17 February 2026 
05:20:49 +0000 (0:00:02.987) 0:04:54.244 ****** 2026-02-17 05:21:00.285780 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:21:00.285791 | orchestrator | 2026-02-17 05:21:00.285819 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-17 05:21:00.285831 | orchestrator | Tuesday 17 February 2026 05:20:52 +0000 (0:00:02.802) 0:04:57.046 ****** 2026-02-17 05:21:00.285841 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:21:00.285853 | orchestrator | 2026-02-17 05:21:00.285864 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-17 05:21:00.285874 | orchestrator | Tuesday 17 February 2026 05:20:56 +0000 (0:00:03.975) 0:05:01.022 ****** 2026-02-17 05:21:00.285889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:21:00.285945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 05:21:00.285960 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:00.285973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:21:00.285985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 
05:21:00.286005 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:00.286084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:21:03.863517 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 05:21:03.863627 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:03.863645 | orchestrator | 2026-02-17 05:21:03.863658 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-17 05:21:03.863671 | orchestrator | Tuesday 17 February 2026 05:21:00 +0000 (0:00:03.596) 0:05:04.619 ****** 2026-02-17 05:21:03.863785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:21:03.863825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 05:21:03.863838 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:03.863876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:21:03.863890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 05:21:03.863909 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:03.863922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:21:03.863943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-17 05:21:20.588298 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:20.588407 | orchestrator | 2026-02-17 05:21:20.588423 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-17 05:21:20.588437 | orchestrator | Tuesday 17 February 2026 05:21:03 +0000 (0:00:03.584) 0:05:08.203 ****** 2026-02-17 05:21:20.588451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 05:21:20.588486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 05:21:20.588519 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:20.588532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 05:21:20.588544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 05:21:20.588555 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:20.588566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 05:21:20.588578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-17 05:21:20.588589 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:20.588600 | orchestrator | 2026-02-17 05:21:20.588611 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-17 05:21:20.588622 | orchestrator | Tuesday 17 February 2026 05:21:07 +0000 (0:00:04.087) 0:05:12.291 ****** 2026-02-17 05:21:20.588634 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:21:20.588661 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:21:20.588673 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:21:20.588683 | orchestrator | 2026-02-17 05:21:20.588742 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-17 05:21:20.588755 | orchestrator | Tuesday 17 February 2026 05:21:10 +0000 (0:00:03.050) 0:05:15.341 ****** 2026-02-17 05:21:20.588766 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:20.588777 | orchestrator | skipping: 
[testbed-node-1] 2026-02-17 05:21:20.588788 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:20.588807 | orchestrator | 2026-02-17 05:21:20.588818 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-17 05:21:20.588831 | orchestrator | Tuesday 17 February 2026 05:21:13 +0000 (0:00:02.659) 0:05:18.000 ****** 2026-02-17 05:21:20.588843 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:20.588856 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:20.588868 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:20.588881 | orchestrator | 2026-02-17 05:21:20.588894 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-17 05:21:20.588905 | orchestrator | Tuesday 17 February 2026 05:21:15 +0000 (0:00:01.456) 0:05:19.457 ****** 2026-02-17 05:21:20.588916 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:21:20.588927 | orchestrator | 2026-02-17 05:21:20.588943 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-17 05:21:20.588955 | orchestrator | Tuesday 17 February 2026 05:21:17 +0000 (0:00:02.204) 0:05:21.661 ****** 2026-02-17 05:21:20.588967 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 05:21:20.588981 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 05:21:20.588992 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 05:21:20.589004 | orchestrator | 2026-02-17 05:21:20.589015 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-17 05:21:20.589027 | orchestrator | Tuesday 17 February 2026 05:21:19 +0000 (0:00:02.688) 0:05:24.350 ****** 2026-02-17 05:21:20.589046 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 05:21:35.052220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 05:21:35.052364 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:35.052383 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:35.052396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 05:21:35.052409 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:35.052420 | orchestrator | 2026-02-17 05:21:35.052433 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-17 05:21:35.052445 | orchestrator | Tuesday 17 February 2026 05:21:21 +0000 (0:00:01.818) 0:05:26.168 ****** 2026-02-17 05:21:35.052458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-17 05:21:35.052471 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:35.052483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-17 05:21:35.052495 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:35.052507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  
2026-02-17 05:21:35.052518 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:35.052529 | orchestrator | 2026-02-17 05:21:35.052540 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-17 05:21:35.052552 | orchestrator | Tuesday 17 February 2026 05:21:23 +0000 (0:00:01.399) 0:05:27.567 ****** 2026-02-17 05:21:35.052563 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:35.052574 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:35.052586 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:35.052597 | orchestrator | 2026-02-17 05:21:35.052631 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-17 05:21:35.052643 | orchestrator | Tuesday 17 February 2026 05:21:24 +0000 (0:00:01.473) 0:05:29.040 ****** 2026-02-17 05:21:35.052654 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:35.052666 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:35.052677 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:35.052688 | orchestrator | 2026-02-17 05:21:35.052699 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-17 05:21:35.052735 | orchestrator | Tuesday 17 February 2026 05:21:27 +0000 (0:00:02.510) 0:05:31.551 ****** 2026-02-17 05:21:35.052748 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:35.052761 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:35.052774 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:35.052786 | orchestrator | 2026-02-17 05:21:35.052800 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-17 05:21:35.052813 | orchestrator | Tuesday 17 February 2026 05:21:28 +0000 (0:00:01.390) 0:05:32.941 ****** 2026-02-17 05:21:35.052826 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:21:35.052839 | 
orchestrator | 2026-02-17 05:21:35.052852 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-17 05:21:35.052865 | orchestrator | Tuesday 17 February 2026 05:21:30 +0000 (0:00:02.064) 0:05:35.006 ****** 2026-02-17 05:21:35.052909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:21:35.052928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:35.052943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-17 05:21:35.052968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-17 05:21:35.052993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:35.253479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:35.253579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:35.253597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 05:21:35.253610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:35.253645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:35.253659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-17 05:21:35.253688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:35.253798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:35.253825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-17 05:21:35.253860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:35.253880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:21:35.253943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:35.378394 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-17 05:21:35.378499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-17 05:21:35.378537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:35.378551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:35.378565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:35.378601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 05:21:35.378614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:35.378627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:21:35.378646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:35.378659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-17 05:21:35.378684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:36.694198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:36.694293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-17 05:21:36.694334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:36.694348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-17 05:21:36.694388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-17 05:21:36.694403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:36.694422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:36.694434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:36.694446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:36.694457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 05:21:36.694472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:36.694487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:37.825958 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-17 05:21:37.826114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:37.826133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:37.826150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-17 05:21:37.826187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:37.826201 | orchestrator | 2026-02-17 05:21:37.826214 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-17 05:21:37.826227 | orchestrator | Tuesday 17 February 2026 05:21:36 +0000 (0:00:06.026) 0:05:41.032 ****** 2026-02-17 05:21:37.826262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:21:37.826297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:37.826311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-17 05:21:37.826328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-17 05:21:37.826351 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:21:37.916098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:37.916208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': 
True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:37.916235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:37.916258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-17 05:21:37.916303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:37.916370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 05:21:37.916384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-17 05:21:37.916398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:37.916411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:37.916428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:37.916449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:37.916469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-17 05:21:38.123638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:38.123772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:38.123788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 05:21:38.123800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:38.123828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:38.123876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-17 05:21:38.123890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:38.123901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:38.123912 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:38.123924 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-17 05:21:38.123936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:38.123953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:38.124001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-17 05:21:39.381294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 
05:21:39.381400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:39.381434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:39.381470 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:39.381485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-17 05:21:39.381517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-17 05:21:39.381530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:39.381542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:39.381555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:39.381580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-17 05:21:39.381593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:39.381614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:54.842824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-17 05:21:54.842942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-17 05:21:54.842959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-17 05:21:54.843015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-17 05:21:54.843032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-17 05:21:54.843045 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:54.843059 | orchestrator | 2026-02-17 05:21:54.843071 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-17 05:21:54.843083 | orchestrator | Tuesday 17 February 2026 05:21:39 +0000 (0:00:02.687) 0:05:43.719 ****** 2026-02-17 05:21:54.843095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:54.843126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:54.843140 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:21:54.843151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:54.843163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:54.843174 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:21:54.843185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:54.843196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:21:54.843219 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:21:54.843230 | orchestrator | 2026-02-17 05:21:54.843242 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-17 05:21:54.843252 | orchestrator | Tuesday 17 February 2026 05:21:42 
+0000 (0:00:02.884) 0:05:46.604 ****** 2026-02-17 05:21:54.843264 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:21:54.843276 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:21:54.843289 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:21:54.843301 | orchestrator | 2026-02-17 05:21:54.843313 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-17 05:21:54.843326 | orchestrator | Tuesday 17 February 2026 05:21:44 +0000 (0:00:02.282) 0:05:48.887 ****** 2026-02-17 05:21:54.843338 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:21:54.843350 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:21:54.843363 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:21:54.843375 | orchestrator | 2026-02-17 05:21:54.843387 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-17 05:21:54.843401 | orchestrator | Tuesday 17 February 2026 05:21:47 +0000 (0:00:02.969) 0:05:51.857 ****** 2026-02-17 05:21:54.843413 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:21:54.843425 | orchestrator | 2026-02-17 05:21:54.843437 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-17 05:21:54.843449 | orchestrator | Tuesday 17 February 2026 05:21:50 +0000 (0:00:02.622) 0:05:54.480 ****** 2026-02-17 05:21:54.843469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-17 05:21:54.843494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-17 05:22:11.977634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-17 05:22:11.977853 | orchestrator | 2026-02-17 05:22:11.977874 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-17 05:22:11.977887 | orchestrator | Tuesday 17 February 2026 05:21:54 +0000 (0:00:04.700) 0:05:59.180 ****** 2026-02-17 05:22:11.977916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-17 05:22:11.977929 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:22:11.977942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-17 05:22:11.977954 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:22:11.977986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-17 05:22:11.978007 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:22:11.978082 | orchestrator | 2026-02-17 05:22:11.978096 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-17 05:22:11.978107 | orchestrator | Tuesday 17 February 2026 05:21:56 +0000 (0:00:01.654) 0:06:00.835 ****** 2026-02-17 05:22:11.978121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-17 05:22:11.978137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-17 05:22:11.978152 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:22:11.978165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-17 05:22:11.978178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-17 05:22:11.978191 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:22:11.978203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-17 05:22:11.978222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-17 05:22:11.978235 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:22:11.978248 | orchestrator | 2026-02-17 05:22:11.978261 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-17 05:22:11.978273 | orchestrator | Tuesday 17 February 2026 05:21:58 +0000 (0:00:01.823) 0:06:02.659 ****** 2026-02-17 05:22:11.978286 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:22:11.978299 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:22:11.978318 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:22:11.978343 | orchestrator | 2026-02-17 05:22:11.978370 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-17 05:22:11.978390 | orchestrator | Tuesday 17 February 2026 05:22:00 +0000 (0:00:02.292) 0:06:04.951 ****** 2026-02-17 05:22:11.978409 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:22:11.978428 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:22:11.978448 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:22:11.978466 | orchestrator | 2026-02-17 05:22:11.978486 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-17 
05:22:11.978506 | orchestrator | Tuesday 17 February 2026 05:22:03 +0000 (0:00:02.936) 0:06:07.887 ****** 2026-02-17 05:22:11.978525 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:22:11.978544 | orchestrator | 2026-02-17 05:22:11.978562 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-17 05:22:11.978594 | orchestrator | Tuesday 17 February 2026 05:22:06 +0000 (0:00:02.517) 0:06:10.405 ****** 2026-02-17 05:22:11.978628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:22:13.184035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:22:13.184158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:22:13.184178 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:22:13.184214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.184248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.184262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:22:13.184280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.184292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:22:13.184311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.184331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.910433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.910537 | orchestrator | 2026-02-17 05:22:13.910554 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-17 05:22:13.910568 | orchestrator | Tuesday 17 February 2026 05:22:13 +0000 (0:00:07.118) 0:06:17.524 ****** 2026-02-17 05:22:13.910603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:22:13.910618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:22:13.910732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.910769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.910781 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:22:13.910795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:22:13.910814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:22:13.910835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-02-17 05:22:13.910847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:22:13.910858 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:22:13.910878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:22:34.885001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:22:34.885130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-17 05:22:34.885171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-17 05:22:34.885185 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:22:34.885199 | orchestrator | 2026-02-17 05:22:34.885211 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-17 05:22:34.885223 | orchestrator | Tuesday 17 February 2026 05:22:15 +0000 (0:00:01.990) 0:06:19.514 ****** 2026-02-17 05:22:34.885236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885287 | orchestrator | skipping: [testbed-node-0] 2026-02-17 
05:22:34.885299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885395 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:22:34.885406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:22:34.885480 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:22:34.885491 | orchestrator | 2026-02-17 05:22:34.885505 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-17 05:22:34.885518 | orchestrator | Tuesday 17 February 2026 05:22:18 +0000 (0:00:03.022) 0:06:22.537 ****** 2026-02-17 05:22:34.885531 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:22:34.885544 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:22:34.885578 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:22:34.885591 | orchestrator | 2026-02-17 05:22:34.885603 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-17 05:22:34.885616 | orchestrator | Tuesday 17 February 2026 05:22:20 +0000 (0:00:02.352) 0:06:24.889 ****** 2026-02-17 05:22:34.885629 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:22:34.885640 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:22:34.885653 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:22:34.885670 | orchestrator | 2026-02-17 05:22:34.885706 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-17 05:22:34.885727 | orchestrator | Tuesday 17 February 2026 05:22:23 +0000 (0:00:03.180) 0:06:28.070 ****** 2026-02-17 05:22:34.885745 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:22:34.885764 | orchestrator | 2026-02-17 05:22:34.885782 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-02-17 05:22:34.885801 | orchestrator | Tuesday 17 February 2026 05:22:26 +0000 (0:00:02.914) 0:06:30.984 ****** 2026-02-17 05:22:34.885820 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-17 05:22:34.885840 | orchestrator | 2026-02-17 05:22:34.885859 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-17 05:22:34.885878 | orchestrator | Tuesday 17 February 2026 05:22:28 +0000 (0:00:01.744) 0:06:32.729 ****** 2026-02-17 05:22:34.885898 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-17 05:22:34.885919 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-17 05:22:34.885952 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-17 05:22:54.673373 | orchestrator | 2026-02-17 05:22:54.673518 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-17 05:22:54.673528 | orchestrator | Tuesday 17 February 2026 05:22:34 +0000 (0:00:06.485) 0:06:39.215 ****** 2026-02-17 05:22:54.673548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:22:54.673554 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:22:54.673560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:22:54.673564 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:22:54.673568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:22:54.673572 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:22:54.673576 | orchestrator | 2026-02-17 05:22:54.673581 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-17 05:22:54.673585 | orchestrator | Tuesday 17 February 2026 05:22:37 +0000 (0:00:02.565) 0:06:41.781 ****** 2026-02-17 05:22:54.673590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-17 05:22:54.673597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-17 05:22:54.673602 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:22:54.673606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-17 05:22:54.673610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-17 05:22:54.673614 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:22:54.673631 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-17 05:22:54.673635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-17 05:22:54.673639 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:22:54.673643 | orchestrator | 2026-02-17 05:22:54.673647 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-17 05:22:54.673651 | orchestrator | Tuesday 17 February 2026 05:22:39 +0000 (0:00:02.514) 0:06:44.296 ****** 2026-02-17 05:22:54.673655 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:22:54.673660 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:22:54.673663 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:22:54.673667 | orchestrator | 2026-02-17 05:22:54.673671 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-17 05:22:54.673675 | orchestrator | Tuesday 17 February 2026 05:22:43 +0000 (0:00:03.843) 0:06:48.139 ****** 2026-02-17 05:22:54.673679 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:22:54.673682 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:22:54.673696 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:22:54.673700 | orchestrator | 2026-02-17 05:22:54.673703 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-17 05:22:54.673707 | orchestrator | Tuesday 17 February 2026 05:22:47 +0000 (0:00:04.021) 0:06:52.161 ****** 2026-02-17 05:22:54.673712 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-17 05:22:54.673717 | orchestrator | 2026-02-17 05:22:54.673721 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-17 05:22:54.673725 | orchestrator | Tuesday 17 February 2026 05:22:49 +0000 (0:00:01.669) 0:06:53.830 ****** 2026-02-17 05:22:54.673733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:22:54.673738 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:22:54.673742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:22:54.673746 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:22:54.673750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:22:54.673754 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:22:54.673762 | orchestrator | 2026-02-17 05:22:54.673766 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-17 05:22:54.673770 | orchestrator | Tuesday 17 February 2026 05:22:52 +0000 (0:00:02.558) 0:06:56.389 ****** 2026-02-17 05:22:54.673774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:22:54.673778 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:22:54.673782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:22:54.673786 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:22:54.673792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-17 05:23:29.718241 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:23:29.718399 | orchestrator | 2026-02-17 05:23:29.718418 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-17 05:23:29.718431 | orchestrator | Tuesday 17 February 2026 05:22:54 +0000 (0:00:02.612) 0:06:59.002 ****** 2026-02-17 05:23:29.718444 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:23:29.718455 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:23:29.718466 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:23:29.718478 | orchestrator | 2026-02-17 05:23:29.718489 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-17 05:23:29.718501 | orchestrator | Tuesday 17 February 2026 05:22:57 +0000 (0:00:02.411) 0:07:01.414 ****** 2026-02-17 05:23:29.718512 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:23:29.718524 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:23:29.718535 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:23:29.718546 | orchestrator | 2026-02-17 05:23:29.718576 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-17 05:23:29.718587 | orchestrator | Tuesday 17 February 2026 05:23:00 +0000 (0:00:03.651) 0:07:05.066 ****** 2026-02-17 05:23:29.718599 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:23:29.718609 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:23:29.718620 | orchestrator | ok: [testbed-node-2] 2026-02-17 
05:23:29.718631 | orchestrator | 2026-02-17 05:23:29.718642 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-17 05:23:29.718653 | orchestrator | Tuesday 17 February 2026 05:23:04 +0000 (0:00:03.999) 0:07:09.065 ****** 2026-02-17 05:23:29.718665 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-17 05:23:29.718676 | orchestrator | 2026-02-17 05:23:29.718688 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-17 05:23:29.718698 | orchestrator | Tuesday 17 February 2026 05:23:07 +0000 (0:00:02.436) 0:07:11.502 ****** 2026-02-17 05:23:29.718740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 05:23:29.718765 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:23:29.718785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 05:23:29.718803 | 
orchestrator | skipping: [testbed-node-1] 2026-02-17 05:23:29.718822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 05:23:29.718842 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:23:29.718864 | orchestrator | 2026-02-17 05:23:29.718885 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-17 05:23:29.718900 | orchestrator | Tuesday 17 February 2026 05:23:09 +0000 (0:00:02.428) 0:07:13.930 ****** 2026-02-17 05:23:29.718912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 05:23:29.718923 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:23:29.718955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 05:23:29.718967 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:23:29.718986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-17 05:23:29.719002 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:23:29.719027 | orchestrator | 2026-02-17 05:23:29.719038 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-17 05:23:29.719050 | orchestrator | Tuesday 17 February 2026 05:23:11 +0000 (0:00:02.399) 0:07:16.329 ****** 2026-02-17 05:23:29.719061 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:23:29.719072 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:23:29.719082 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:23:29.719093 | orchestrator | 2026-02-17 05:23:29.719104 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-17 05:23:29.719115 | orchestrator | Tuesday 17 February 2026 05:23:14 +0000 (0:00:02.554) 0:07:18.884 ****** 2026-02-17 05:23:29.719126 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:23:29.719137 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:23:29.719147 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:23:29.719158 | orchestrator | 2026-02-17 05:23:29.719169 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-17 05:23:29.719180 | orchestrator | Tuesday 17 February 2026 05:23:18 +0000 (0:00:03.594) 0:07:22.478 ****** 2026-02-17 05:23:29.719191 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:23:29.719201 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:23:29.719212 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:23:29.719223 | orchestrator | 2026-02-17 05:23:29.719234 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-17 05:23:29.719245 | orchestrator | Tuesday 17 February 2026 05:23:22 +0000 (0:00:04.575) 0:07:27.054 ****** 2026-02-17 05:23:29.719256 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:23:29.719275 | orchestrator | 2026-02-17 05:23:29.719293 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-17 05:23:29.719342 | orchestrator | Tuesday 17 February 2026 05:23:25 +0000 (0:00:02.630) 0:07:29.685 ****** 2026-02-17 05:23:29.719363 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 05:23:29.719382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 05:23:29.719413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 05:23:31.085506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 05:23:31.085613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:23:31.085632 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 05:23:31.085647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 05:23:31.085659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 05:23:31.085689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 05:23:31.085729 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-17 05:23:31.085742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:23:31.085754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 05:23:31.085766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 05:23:31.085777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 05:23:31.085789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:23:31.085809 | orchestrator | 2026-02-17 05:23:31.085830 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-17 05:23:32.300849 | orchestrator | Tuesday 17 February 2026 05:23:31 +0000 (0:00:05.730) 0:07:35.415 ****** 2026-02-17 05:23:32.300997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-17 05:23:32.301023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-02-17 05:23:32.301037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-17 05:23:32.301051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 05:23:32.301064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 05:23:32.301115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 05:23:32.301128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 05:23:32.301140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 05:23:32.301152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:23:32.301164 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:23:32.301209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:23:32.301222 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:23:32.301234 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-17 05:23:32.301267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-17 05:23:51.286400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-17 05:23:51.286495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-17 05:23:51.286507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-17 05:23:51.286516 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:23:51.286525 | orchestrator | 2026-02-17 05:23:51.286534 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-17 05:23:51.286543 | orchestrator | Tuesday 17 February 2026 05:23:33 +0000 (0:00:02.356) 0:07:37.773 ****** 2026-02-17 05:23:51.286551 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-17 05:23:51.286562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-17 05:23:51.286589 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:23:51.286597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-17 05:23:51.286604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-17 05:23:51.286612 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:23:51.286620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-17 05:23:51.286627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-17 05:23:51.286635 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:23:51.286642 | orchestrator | 2026-02-17 05:23:51.286649 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-17 05:23:51.286657 | orchestrator | Tuesday 17 February 2026 05:23:35 +0000 (0:00:02.407) 0:07:40.180 ****** 2026-02-17 05:23:51.286664 | 
orchestrator | ok: [testbed-node-0] 2026-02-17 05:23:51.286672 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:23:51.286680 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:23:51.286687 | orchestrator | 2026-02-17 05:23:51.286694 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-17 05:23:51.286702 | orchestrator | Tuesday 17 February 2026 05:23:38 +0000 (0:00:02.379) 0:07:42.560 ****** 2026-02-17 05:23:51.286709 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:23:51.286716 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:23:51.286750 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:23:51.286758 | orchestrator | 2026-02-17 05:23:51.286766 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-17 05:23:51.286773 | orchestrator | Tuesday 17 February 2026 05:23:41 +0000 (0:00:03.056) 0:07:45.617 ****** 2026-02-17 05:23:51.286781 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:23:51.286789 | orchestrator | 2026-02-17 05:23:51.286796 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-17 05:23:51.286804 | orchestrator | Tuesday 17 February 2026 05:23:44 +0000 (0:00:02.751) 0:07:48.368 ****** 2026-02-17 05:23:51.286812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:23:51.286824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:23:51.286838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:23:51.286857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:23:53.711377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:23:53.712442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:23:53.712506 | orchestrator | 2026-02-17 05:23:53.712522 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single 
external frontend] *** 2026-02-17 05:23:53.712534 | orchestrator | Tuesday 17 February 2026 05:23:51 +0000 (0:00:07.255) 0:07:55.624 ****** 2026-02-17 05:23:53.712547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:23:53.712598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk 
GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:23:53.712612 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:23:53.712625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:23:53.712645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:23:53.712656 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:23:53.712668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:23:53.712694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:24:04.587517 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:24:04.587634 | orchestrator | 2026-02-17 05:24:04.587653 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-17 05:24:04.587687 | orchestrator | Tuesday 17 February 2026 05:23:53 +0000 (0:00:02.413) 0:07:58.038 ****** 2026-02-17 05:24:04.587702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:04.587717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-17 05:24:04.587732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-17 05:24:04.587746 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:24:04.587757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:04.587769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-17 05:24:04.587781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-17 05:24:04.587792 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:24:04.587804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:04.587815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-17 05:24:04.587827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-17 05:24:04.587838 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:24:04.587849 | orchestrator | 2026-02-17 05:24:04.587861 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-17 05:24:04.587872 | orchestrator | Tuesday 17 February 2026 05:23:55 +0000 (0:00:01.828) 0:07:59.867 ****** 2026-02-17 05:24:04.587883 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:24:04.587895 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:24:04.587921 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:24:04.587932 | orchestrator | 2026-02-17 05:24:04.587944 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-17 05:24:04.587955 | orchestrator | Tuesday 17 February 2026 05:23:56 +0000 (0:00:01.452) 0:08:01.319 ****** 2026-02-17 05:24:04.587966 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:24:04.587977 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:24:04.587988 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:24:04.587999 | orchestrator | 2026-02-17 05:24:04.588018 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-17 05:24:04.588030 | orchestrator | Tuesday 17 February 2026 05:23:59 +0000 (0:00:02.372) 0:08:03.691 ****** 2026-02-17 05:24:04.588042 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:24:04.588056 | orchestrator | 2026-02-17 05:24:04.588069 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-17 05:24:04.588082 | orchestrator | Tuesday 17 February 2026 05:24:01 +0000 (0:00:02.657) 0:08:06.349 ****** 
2026-02-17 05:24:04.588117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-17 05:24:04.588136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 05:24:04.588177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:04.588193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:04.588208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 05:24:04.588237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-17 05:24:06.667809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 05:24:06.667921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:06.667940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-17 05:24:06.667954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:06.667986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 05:24:06.668021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 05:24:06.668052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:06.668065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:06.668076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 05:24:06.668088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:24:06.668107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-17 05:24:06.668128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:06.668197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:24:08.919332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:08.919438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-17 05:24:08.919455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 05:24:08.919506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:08.919521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 
2026-02-17 05:24:08.919535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:08.919566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-17 05:24:08.919578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 05:24:08.919590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:08.919615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:08.919627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 05:24:08.919640 | orchestrator | 2026-02-17 05:24:08.919654 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-17 05:24:08.919666 | orchestrator | Tuesday 17 February 2026 05:24:07 +0000 (0:00:05.893) 
0:08:12.243 ****** 2026-02-17 05:24:08.919687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-17 05:24:09.082301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 05:24:09.082418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:09.082470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:09.082498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 05:24:09.082513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:24:09.082550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-17 05:24:09.082571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:09.082588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:09.082627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-17 05:24:09.082647 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 05:24:09.082664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 05:24:09.082681 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:24:09.082700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:09.082731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:10.283908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 05:24:10.284057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:24:10.284092 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-17 05:24:10.284105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:10.284116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:10.284178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-17 05:24:10.284198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 05:24:10.284214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-17 05:24:10.284226 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:24:10.284239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:10.284253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:10.284270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-17 05:24:10.284298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:24:23.138807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option 
httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-17 05:24:23.138923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:23.138939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:24:23.138949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-17 05:24:23.138959 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:24:23.138971 | orchestrator | 2026-02-17 
05:24:23.138981 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-17 05:24:23.138991 | orchestrator | Tuesday 17 February 2026 05:24:10 +0000 (0:00:02.382) 0:08:14.625 ****** 2026-02-17 05:24:23.139001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-17 05:24:23.139013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-17 05:24:23.139042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:23.139067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:23.139122 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:24:23.139132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-17 05:24:23.139141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-17 05:24:23.139150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:23.139164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:23.139174 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:24:23.139183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-17 05:24:23.139192 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-17 05:24:23.139201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:23.139210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-17 05:24:23.139219 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:24:23.139228 | orchestrator | 2026-02-17 05:24:23.139242 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-17 05:24:23.139251 | orchestrator | Tuesday 17 February 2026 05:24:12 +0000 (0:00:01.942) 0:08:16.568 ****** 2026-02-17 05:24:23.139260 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:24:23.139268 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:24:23.139277 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:24:23.139285 | orchestrator | 2026-02-17 05:24:23.139294 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-17 05:24:23.139303 | orchestrator | Tuesday 17 February 2026 05:24:14 +0000 (0:00:02.029) 
0:08:18.598 ****** 2026-02-17 05:24:23.139312 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:24:23.139321 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:24:23.139329 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:24:23.139339 | orchestrator | 2026-02-17 05:24:23.139349 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-17 05:24:23.139359 | orchestrator | Tuesday 17 February 2026 05:24:16 +0000 (0:00:02.531) 0:08:21.129 ****** 2026-02-17 05:24:23.139369 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:24:23.139379 | orchestrator | 2026-02-17 05:24:23.139389 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-17 05:24:23.139399 | orchestrator | Tuesday 17 February 2026 05:24:19 +0000 (0:00:02.406) 0:08:23.536 ****** 2026-02-17 05:24:23.139416 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-17 05:24:41.519469 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-17 05:24:41.519587 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 
2026-02-17 05:24:41.519626 | orchestrator |
2026-02-17 05:24:41.519642 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-17 05:24:41.519655 | orchestrator | Tuesday 17 February 2026 05:24:23 +0000 (0:00:03.933) 0:08:27.469 ******
2026-02-17 05:24:41.519667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:24:41.519680 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:24:41.519710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:24:41.519723 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:24:41.519742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:24:41.519754 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:24:41.519765 | orchestrator |
2026-02-17 05:24:41.519777 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-02-17 05:24:41.519797 | orchestrator | Tuesday 17 February 2026 05:24:24 +0000 (0:00:01.487) 0:08:28.956 ******
2026-02-17 05:24:41.519808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-17 05:24:41.519821 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:24:41.519832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-17 05:24:41.519843 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:24:41.519854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-17 05:24:41.519865 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:24:41.519876 | orchestrator |
2026-02-17 05:24:41.519887 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-02-17 05:24:41.519898 | orchestrator | Tuesday 17 February 2026 05:24:26 +0000 (0:00:01.540) 0:08:30.497 ******
2026-02-17 05:24:41.519909 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:24:41.519921 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:24:41.519931 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:24:41.519942 | orchestrator |
2026-02-17 05:24:41.519953 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-02-17 05:24:41.519964 | orchestrator | Tuesday 17 February 2026 05:24:28 +0000 (0:00:01.916) 0:08:32.414 ******
2026-02-17 05:24:41.519975 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:24:41.519986 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:24:41.520036 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:24:41.520052 | orchestrator |
2026-02-17 05:24:41.520070 | orchestrator | TASK [include_role : skyline] **************************************************
2026-02-17 05:24:41.520091 | orchestrator | Tuesday 17 February 2026 05:24:30 +0000 (0:00:02.489) 0:08:34.903 ******
2026-02-17 05:24:41.520111 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:24:41.520124 | orchestrator |
2026-02-17 05:24:41.520137 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-02-17 05:24:41.520149 | orchestrator | Tuesday 17 February 2026 05:24:32 +0000 (0:00:02.362) 0:08:37.266 ******
2026-02-17 05:24:41.520164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-17 05:24:41.520195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-17 05:24:43.242649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-17 05:24:43.242762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-17 05:24:43.242781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-17 05:24:43.242831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-17 05:24:43.242866 | orchestrator |
2026-02-17 05:24:43.242881 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-02-17 05:24:43.242895 | orchestrator | Tuesday 17 February 2026 05:24:41 +0000 (0:00:08.590) 0:08:45.857 ******
2026-02-17 05:24:43.242908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-17 05:24:43.242920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-17 05:24:43.242933 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:24:43.242945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-17 05:24:43.242979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-17 05:25:05.860314 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:25:05.860431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-17 05:25:05.860452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-17 05:25:05.860466 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:25:05.860478 | orchestrator |
2026-02-17 05:25:05.860491 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-02-17 05:25:05.860503 | orchestrator | Tuesday 17 February 2026 05:24:43 +0000 (0:00:01.727) 0:08:47.584 ******
2026-02-17 05:25:05.860516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-17 05:25:05.860531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-17 05:25:05.860571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-17 05:25:05.860592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-17 05:25:05.860610 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:25:05.860643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-17 05:25:05.860661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-17 05:25:05.860700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-17 05:25:05.860795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-17 05:25:05.860816 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:25:05.860827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-17 05:25:05.860839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-17 05:25:05.860853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-17 05:25:05.860868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-17 05:25:05.860881 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:25:05.860894 | orchestrator |
2026-02-17 05:25:05.860930 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-02-17 05:25:05.860948 | orchestrator | Tuesday 17 February 2026 05:24:45 +0000 (0:00:02.420) 0:08:50.004 ******
2026-02-17 05:25:05.860962 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:25:05.860974 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:25:05.860987 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:25:05.861000 | orchestrator |
2026-02-17 05:25:05.861012 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-02-17 05:25:05.861036 | orchestrator | Tuesday 17 February 2026 05:24:47 +0000 (0:00:02.218) 0:08:52.223 ******
2026-02-17 05:25:05.861049 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:25:05.861062 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:25:05.861075 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:25:05.861089 | orchestrator |
2026-02-17 05:25:05.861108 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-02-17 05:25:05.861136 | orchestrator | Tuesday 17 February 2026 05:24:50 +0000 (0:00:03.048) 0:08:55.272 ******
2026-02-17 05:25:05.861156 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:25:05.861173 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:25:05.861192 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:25:05.861208 | orchestrator |
2026-02-17 05:25:05.861227 | orchestrator | TASK [include_role : trove] ****************************************************
2026-02-17 05:25:05.861245 | orchestrator | Tuesday 17 February 2026 05:24:52 +0000 (0:00:01.495) 0:08:56.767 ******
2026-02-17 05:25:05.861264 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:25:05.861282 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:25:05.861300 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:25:05.861318 | orchestrator |
2026-02-17 05:25:05.861337 | orchestrator | TASK [include_role : venus] ****************************************************
2026-02-17 05:25:05.861356 | orchestrator | Tuesday 17 February 2026 05:24:54 +0000 (0:00:01.598) 0:08:58.366 ******
2026-02-17 05:25:05.861374 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:25:05.861393 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:25:05.861411 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:25:05.861429 | orchestrator |
2026-02-17 05:25:05.861448 | orchestrator | TASK [include_role : watcher] **************************************************
2026-02-17 05:25:05.861477 | orchestrator | Tuesday 17 February 2026 05:24:55 +0000 (0:00:01.982) 0:09:00.348 ******
2026-02-17 05:25:05.861496 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:25:05.861514 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:25:05.861533 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:25:05.861551 | orchestrator |
2026-02-17 05:25:05.861570 | orchestrator | TASK [include_role : zun] ******************************************************
2026-02-17 05:25:05.861589 | orchestrator | Tuesday 17 February 2026 05:24:57 +0000 (0:00:01.419) 0:09:01.768 ******
2026-02-17 05:25:05.861608 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:25:05.861627 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:25:05.861645 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:25:05.861664 | orchestrator |
2026-02-17 05:25:05.861683 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-02-17 05:25:05.861702 | orchestrator | Tuesday 17 February 2026 05:24:58 +0000 (0:00:01.436) 0:09:03.204 ******
2026-02-17 05:25:05.861722 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:25:05.861741 | orchestrator |
2026-02-17 05:25:05.861760 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-02-17 05:25:05.861778 | orchestrator | Tuesday 17 February 2026 05:25:01 +0000 (0:00:02.921) 0:09:06.126 ******
2026-02-17 05:25:05.861815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-17 05:25:09.923355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-17 05:25:09.923468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-17 05:25:09.923481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-17 05:25:09.923491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-17 05:25:09.923513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-17 05:25:09.923524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-17 05:25:09.923551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-17 05:25:09.923567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-17 05:25:09.923577 | orchestrator |
2026-02-17 05:25:09.923588 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-02-17 05:25:09.923598 | orchestrator | Tuesday 17 February 2026 05:25:05 +0000 (0:00:04.074) 0:09:10.201 ******
2026-02-17 05:25:09.923608 | orchestrator | changed: [testbed-node-0] => {
2026-02-17 05:25:09.923619 | orchestrator |     "msg": "Notifying handlers"
2026-02-17 05:25:09.923628 | orchestrator | }
2026-02-17 05:25:09.923637 | orchestrator | changed: [testbed-node-1] => {
2026-02-17 05:25:09.923646 | orchestrator |     "msg": "Notifying handlers"
2026-02-17 05:25:09.923655 | orchestrator | }
2026-02-17 05:25:09.923664 | orchestrator | changed: [testbed-node-2] => {
2026-02-17 05:25:09.923672 | orchestrator |     "msg": "Notifying handlers"
2026-02-17 05:25:09.923681 | orchestrator | }
2026-02-17 05:25:09.923690 | orchestrator |
2026-02-17 05:25:09.923699 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-17 05:25:09.923708 | orchestrator | Tuesday 17 February 2026 05:25:07 +0000 (0:00:01.538) 0:09:11.739 ******
2026-02-17 05:25:09.923718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-17 05:25:09.923732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-17 05:25:09.923742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-17 05:25:09.923751 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:25:09.923761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-17 05:25:09.923782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-17 05:27:11.774601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-17 05:27:11.774729 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:27:11.774756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-17 05:27:11.774778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-17 05:27:11.774817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-17 05:27:11.774840 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:27:11.774860 | orchestrator |
2026-02-17 05:27:11.774879 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-02-17 05:27:11.774900 | orchestrator | Tuesday 17 February 2026 05:25:09 +0000 (0:00:02.520) 0:09:14.259 ******
2026-02-17 05:27:11.774920 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:27:11.774970 | orchestrator | ok: [testbed-node-1]
2026-02-17
05:27:11.774990 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:11.775007 | orchestrator | 2026-02-17 05:27:11.775027 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-17 05:27:11.775045 | orchestrator | Tuesday 17 February 2026 05:25:11 +0000 (0:00:01.796) 0:09:16.055 ****** 2026-02-17 05:27:11.775064 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:27:11.775084 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:27:11.775103 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:11.775122 | orchestrator | 2026-02-17 05:27:11.775141 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-17 05:27:11.775160 | orchestrator | Tuesday 17 February 2026 05:25:13 +0000 (0:00:01.488) 0:09:17.544 ****** 2026-02-17 05:27:11.775179 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:27:11.775198 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:27:11.775216 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:27:11.775235 | orchestrator | 2026-02-17 05:27:11.775255 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-17 05:27:11.775275 | orchestrator | Tuesday 17 February 2026 05:25:20 +0000 (0:00:07.141) 0:09:24.685 ****** 2026-02-17 05:27:11.775294 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:27:11.775309 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:27:11.775328 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:27:11.775346 | orchestrator | 2026-02-17 05:27:11.775364 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-17 05:27:11.775383 | orchestrator | Tuesday 17 February 2026 05:25:27 +0000 (0:00:07.431) 0:09:32.117 ****** 2026-02-17 05:27:11.775394 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:27:11.775405 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:27:11.775416 | orchestrator | 
changed: [testbed-node-2] 2026-02-17 05:27:11.775426 | orchestrator | 2026-02-17 05:27:11.775439 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-17 05:27:11.775458 | orchestrator | Tuesday 17 February 2026 05:25:34 +0000 (0:00:07.179) 0:09:39.297 ****** 2026-02-17 05:27:11.775476 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:27:11.775495 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:27:11.775607 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:27:11.775629 | orchestrator | 2026-02-17 05:27:11.775660 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-17 05:27:11.775672 | orchestrator | Tuesday 17 February 2026 05:25:42 +0000 (0:00:08.019) 0:09:47.316 ****** 2026-02-17 05:27:11.775683 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:11.775693 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:27:11.775706 | orchestrator | 2026-02-17 05:27:11.775724 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-17 05:27:11.775743 | orchestrator | Tuesday 17 February 2026 05:25:46 +0000 (0:00:03.773) 0:09:51.090 ****** 2026-02-17 05:27:11.775762 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:27:11.775781 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:27:11.775801 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:27:11.775813 | orchestrator | 2026-02-17 05:27:11.775823 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-17 05:27:11.775834 | orchestrator | Tuesday 17 February 2026 05:26:00 +0000 (0:00:13.750) 0:10:04.841 ****** 2026-02-17 05:27:11.775845 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:11.775856 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:27:11.775866 | orchestrator | 2026-02-17 05:27:11.775877 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived 
container] ************* 2026-02-17 05:27:11.775888 | orchestrator | Tuesday 17 February 2026 05:26:05 +0000 (0:00:04.567) 0:10:09.408 ****** 2026-02-17 05:27:11.775899 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:27:11.775909 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:27:11.775920 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:27:11.775931 | orchestrator | 2026-02-17 05:27:11.775942 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-17 05:27:11.775967 | orchestrator | Tuesday 17 February 2026 05:26:12 +0000 (0:00:07.466) 0:10:16.875 ****** 2026-02-17 05:27:11.775985 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:27:11.776003 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:27:11.776022 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:27:11.776039 | orchestrator | 2026-02-17 05:27:11.776050 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-17 05:27:11.776061 | orchestrator | Tuesday 17 February 2026 05:26:19 +0000 (0:00:06.866) 0:10:23.741 ****** 2026-02-17 05:27:11.776071 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:27:11.776082 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:27:11.776093 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:27:11.776108 | orchestrator | 2026-02-17 05:27:11.776125 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-17 05:27:11.776144 | orchestrator | Tuesday 17 February 2026 05:26:26 +0000 (0:00:06.842) 0:10:30.583 ****** 2026-02-17 05:27:11.776163 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:27:11.776181 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:27:11.776200 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:27:11.776219 | orchestrator | 2026-02-17 05:27:11.776238 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] 
**************** 2026-02-17 05:27:11.776257 | orchestrator | Tuesday 17 February 2026 05:26:33 +0000 (0:00:06.835) 0:10:37.419 ****** 2026-02-17 05:27:11.776275 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:27:11.776295 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:27:11.776314 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:27:11.776334 | orchestrator | 2026-02-17 05:27:11.776356 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-17 05:27:11.776369 | orchestrator | Tuesday 17 February 2026 05:26:39 +0000 (0:00:06.932) 0:10:44.352 ****** 2026-02-17 05:27:11.776388 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:27:11.776407 | orchestrator | 2026-02-17 05:27:11.776426 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-17 05:27:11.776445 | orchestrator | Tuesday 17 February 2026 05:26:42 +0000 (0:00:02.590) 0:10:46.942 ****** 2026-02-17 05:27:11.776462 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:27:11.776480 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:27:11.776500 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:27:11.776566 | orchestrator | 2026-02-17 05:27:11.776585 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-17 05:27:11.776604 | orchestrator | Tuesday 17 February 2026 05:26:55 +0000 (0:00:13.341) 0:11:00.284 ****** 2026-02-17 05:27:11.776622 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:27:11.776638 | orchestrator | 2026-02-17 05:27:11.776649 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-17 05:27:11.776660 | orchestrator | Tuesday 17 February 2026 05:26:59 +0000 (0:00:03.665) 0:11:03.949 ****** 2026-02-17 05:27:11.776671 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:27:11.776682 | orchestrator | skipping: [testbed-node-2] 2026-02-17 
05:27:11.776699 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:27:11.776716 | orchestrator | 2026-02-17 05:27:11.776735 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-17 05:27:11.776753 | orchestrator | Tuesday 17 February 2026 05:27:06 +0000 (0:00:07.070) 0:11:11.020 ****** 2026-02-17 05:27:11.776773 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:27:11.776791 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:27:11.776808 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:11.776826 | orchestrator | 2026-02-17 05:27:11.776844 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-17 05:27:11.776862 | orchestrator | Tuesday 17 February 2026 05:27:08 +0000 (0:00:02.225) 0:11:13.245 ****** 2026-02-17 05:27:11.776880 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:27:11.776897 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:27:11.776915 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:11.776948 | orchestrator | 2026-02-17 05:27:11.776968 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:27:11.776989 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-17 05:27:11.777009 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-17 05:27:11.777038 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-17 05:27:12.723985 | orchestrator | 2026-02-17 05:27:12.724082 | orchestrator | 2026-02-17 05:27:12.724096 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:27:12.724108 | orchestrator | Tuesday 17 February 2026 05:27:11 +0000 (0:00:02.858) 0:11:16.104 ****** 2026-02-17 05:27:12.724118 | orchestrator | 
=============================================================================== 2026-02-17 05:27:12.724128 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.75s 2026-02-17 05:27:12.724138 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.34s 2026-02-17 05:27:12.724148 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.59s 2026-02-17 05:27:12.724157 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.02s 2026-02-17 05:27:12.724168 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.47s 2026-02-17 05:27:12.724178 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.43s 2026-02-17 05:27:12.724187 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.26s 2026-02-17 05:27:12.724197 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.18s 2026-02-17 05:27:12.724207 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.14s 2026-02-17 05:27:12.724216 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.12s 2026-02-17 05:27:12.724226 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.07s 2026-02-17 05:27:12.724235 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 6.93s 2026-02-17 05:27:12.724245 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.87s 2026-02-17 05:27:12.724254 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.84s 2026-02-17 05:27:12.724264 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.84s 2026-02-17 05:27:12.724274 | orchestrator | haproxy-config 
: Copying over nova-cell:nova-novncproxy haproxy config --- 6.49s 2026-02-17 05:27:12.724283 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.03s 2026-02-17 05:27:12.724293 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.89s 2026-02-17 05:27:12.724302 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.74s 2026-02-17 05:27:12.724312 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.73s 2026-02-17 05:27:13.125194 | orchestrator | + osism apply -a upgrade opensearch 2026-02-17 05:27:15.209674 | orchestrator | 2026-02-17 05:27:15 | INFO  | Task 60badc30-812d-4483-91be-ead5e767b184 (opensearch) was prepared for execution. 2026-02-17 05:27:15.209803 | orchestrator | 2026-02-17 05:27:15 | INFO  | It takes a moment until task 60badc30-812d-4483-91be-ead5e767b184 (opensearch) has been started and output is visible here. 2026-02-17 05:27:33.711315 | orchestrator | 2026-02-17 05:27:33.711432 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 05:27:33.711514 | orchestrator | 2026-02-17 05:27:33.711531 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 05:27:33.711543 | orchestrator | Tuesday 17 February 2026 05:27:21 +0000 (0:00:01.701) 0:00:01.701 ****** 2026-02-17 05:27:33.711582 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:27:33.711595 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:27:33.711606 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:33.711617 | orchestrator | 2026-02-17 05:27:33.711628 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 05:27:33.711639 | orchestrator | Tuesday 17 February 2026 05:27:22 +0000 (0:00:01.714) 0:00:03.415 ****** 2026-02-17 05:27:33.711651 | orchestrator | ok: [testbed-node-0] 
=> (item=enable_opensearch_True) 2026-02-17 05:27:33.711662 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-17 05:27:33.711673 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-17 05:27:33.711684 | orchestrator | 2026-02-17 05:27:33.711695 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-17 05:27:33.711706 | orchestrator | 2026-02-17 05:27:33.711717 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-17 05:27:33.711729 | orchestrator | Tuesday 17 February 2026 05:27:24 +0000 (0:00:02.032) 0:00:05.448 ****** 2026-02-17 05:27:33.711741 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:27:33.711752 | orchestrator | 2026-02-17 05:27:33.711763 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-17 05:27:33.711775 | orchestrator | Tuesday 17 February 2026 05:27:27 +0000 (0:00:02.444) 0:00:07.892 ****** 2026-02-17 05:27:33.711786 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 05:27:33.711797 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 05:27:33.711808 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-17 05:27:33.711819 | orchestrator | 2026-02-17 05:27:33.711830 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-17 05:27:33.711847 | orchestrator | Tuesday 17 February 2026 05:27:29 +0000 (0:00:02.116) 0:00:10.008 ****** 2026-02-17 05:27:33.711876 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:33.711908 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:33.711971 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:33.712013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:33.712040 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:33.712065 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:33.712095 | orchestrator | 2026-02-17 05:27:33.712108 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-17 05:27:33.712121 | orchestrator | Tuesday 17 February 2026 05:27:31 +0000 (0:00:02.375) 0:00:12.384 ****** 2026-02-17 05:27:33.712140 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:27:33.712154 | orchestrator | 2026-02-17 05:27:33.712176 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-17 05:27:39.153240 | orchestrator | Tuesday 17 February 2026 05:27:33 +0000 (0:00:01.777) 0:00:14.162 ****** 2026-02-17 05:27:39.153349 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:39.153367 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:39.153379 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:39.153408 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:39.153518 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:39.153533 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:39.153545 | orchestrator | 2026-02-17 05:27:39.153556 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-17 05:27:39.153567 | orchestrator | Tuesday 17 February 2026 05:27:37 +0000 (0:00:03.504) 0:00:17.666 ****** 2026-02-17 05:27:39.153577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:27:39.153612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:27:41.044864 | 
orchestrator | skipping: [testbed-node-0] 2026-02-17 05:27:41.044955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:27:41.044971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:27:41.044980 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:27:41.044987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:27:41.045039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:27:41.045048 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:27:41.045055 | orchestrator | 2026-02-17 05:27:41.045062 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-17 05:27:41.045070 | orchestrator | Tuesday 17 February 2026 05:27:39 +0000 (0:00:01.940) 0:00:19.606 ****** 2026-02-17 05:27:41.045077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:27:41.045084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:27:41.045096 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:27:41.045103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
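Each service item in the task output above carries a `haproxy` sub-dict (mode, port, external flag, extra backend options) that kolla-ansible later renders into HAProxy frontend/backend configuration. A minimal sketch of that mapping, for readability only: the VIP address (`192.168.16.254`), the function name, and the flat `listen` layout are all assumptions made for this illustration; the real kolla-ansible haproxy-config role uses Jinja2 templates and structures the output differently.

```python
# Illustrative sketch only: render a 'haproxy' service dict (as seen in
# the log items above) as a simplified HAProxy listen section.
# The VIP and layout are assumptions, not kolla-ansible's actual output.
def render_haproxy_service(name, svc, vip="192.168.16.254"):
    scope = "external" if svc.get("external") else "internal"
    lines = [
        f"listen {name}",
        f"    # {scope} endpoint",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['port']}",
    ]
    # Extra per-backend options from the dict are appended verbatim,
    # e.g. 'option httpchk' for the health probe.
    for opt in svc.get("backend_http_extra", []):
        lines.append(f"    {opt}")
    return "\n".join(lines)

# The internal opensearch service exactly as it appears in the items.
svc = {
    "enabled": True, "mode": "http", "external": False, "port": "9200",
    "frontend_http_extra": ["option dontlog-normal"],
    "backend_http_extra": ["option httpchk"],
}
print(render_haproxy_service("opensearch", svc))
```

Note that the item dicts repeat once per node in the log solely because the Docker `healthcheck` URL embeds each node's own IP (192.168.16.10/.11/.12); the haproxy sub-dict is identical on every node.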
 2026-02-17 05:27:41.045118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:27:44.778206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:27:44.778345 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:27:44.778369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:27:44.778405 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:27:44.778475 | orchestrator | 2026-02-17 05:27:44.778491 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-17 05:27:44.778504 | orchestrator | Tuesday 17 February 2026 05:27:41 +0000 (0:00:01.889) 0:00:21.496 ****** 2026-02-17 05:27:44.778532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:44.778576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:44.778597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:44.778619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:44.778661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:44.778702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:58.533226 | orchestrator | 2026-02-17 05:27:58.533373 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-17 05:27:58.533718 | orchestrator | Tuesday 17 February 2026 05:27:44 +0000 (0:00:03.731) 0:00:25.228 ****** 2026-02-17 05:27:58.533745 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:27:58.533766 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:27:58.533786 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:58.533806 | orchestrator | 2026-02-17 05:27:58.533826 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-17 05:27:58.533846 | orchestrator | Tuesday 17 February 2026 05:27:48 +0000 (0:00:03.580) 0:00:28.809 ****** 2026-02-17 05:27:58.533865 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:27:58.533919 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:27:58.533939 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:27:58.533959 | orchestrator | 2026-02-17 05:27:58.533979 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-17 05:27:58.533999 | orchestrator | Tuesday 17 February 2026 05:27:51 +0000 (0:00:03.147) 0:00:31.956 ****** 2026-02-17 05:27:58.534109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:58.534155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:58.534195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-17 05:27:58.534237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:58.534266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:58.534284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-17 05:27:58.534297 | orchestrator | 2026-02-17 05:27:58.534309 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-17 05:27:58.534322 | orchestrator | Tuesday 17 February 2026 05:27:54 +0000 (0:00:03.503) 0:00:35.460 ****** 2026-02-17 05:27:58.534333 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:27:58.534346 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:27:58.534357 | orchestrator | } 2026-02-17 05:27:58.534369 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:27:58.534380 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:27:58.534433 | orchestrator | } 2026-02-17 05:27:58.534444 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 05:27:58.534454 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:27:58.534464 | orchestrator | } 2026-02-17 05:27:58.534474 | orchestrator | 2026-02-17 05:27:58.534484 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-17 05:27:58.534494 | orchestrator | Tuesday 17 February 2026 05:27:56 +0000 (0:00:01.476) 0:00:36.937 ****** 2026-02-17 05:27:58.534513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:31:02.637619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:31:02.637777 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:31:02.637796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:31:02.637832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 05:31:02.637872 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:31:02.637907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-17 05:31:02.637920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-17 
05:31:02.637933 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:31:02.637944 | orchestrator | 2026-02-17 05:31:02.637957 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-17 05:31:02.637969 | orchestrator | Tuesday 17 February 2026 05:27:58 +0000 (0:00:02.048) 0:00:38.986 ****** 2026-02-17 05:31:02.637981 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:31:02.637992 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:31:02.638096 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:31:02.638109 | orchestrator | 2026-02-17 05:31:02.638122 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-17 05:31:02.638146 | orchestrator | Tuesday 17 February 2026 05:28:00 +0000 (0:00:01.594) 0:00:40.581 ****** 2026-02-17 05:31:02.638159 | orchestrator | 2026-02-17 05:31:02.638172 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-17 05:31:02.638185 | orchestrator | Tuesday 17 February 2026 05:28:00 +0000 (0:00:00.514) 0:00:41.095 ****** 2026-02-17 05:31:02.638197 | orchestrator | 2026-02-17 05:31:02.638211 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-17 05:31:02.638223 | orchestrator | Tuesday 17 February 2026 05:28:01 +0000 (0:00:00.450) 0:00:41.546 ****** 2026-02-17 05:31:02.638235 | orchestrator | 2026-02-17 05:31:02.638248 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-17 05:31:02.638267 | orchestrator | Tuesday 17 February 2026 05:28:01 +0000 (0:00:00.801) 0:00:42.347 ****** 2026-02-17 05:31:02.638280 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:31:02.638294 | orchestrator | 2026-02-17 05:31:02.638307 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-17 05:31:02.638320 | orchestrator | Tuesday 17 
February 2026 05:28:05 +0000 (0:00:03.511) 0:00:45.859 ****** 2026-02-17 05:31:02.638332 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:31:02.638360 | orchestrator | 2026-02-17 05:31:02.638374 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-17 05:31:02.638387 | orchestrator | Tuesday 17 February 2026 05:28:09 +0000 (0:00:04.145) 0:00:50.005 ****** 2026-02-17 05:31:02.638403 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:31:02.638423 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:31:02.638442 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:31:02.638461 | orchestrator | 2026-02-17 05:31:02.638479 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-17 05:31:02.638497 | orchestrator | Tuesday 17 February 2026 05:29:19 +0000 (0:01:09.889) 0:01:59.894 ****** 2026-02-17 05:31:02.638514 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:31:02.638532 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:31:02.638551 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:31:02.638570 | orchestrator | 2026-02-17 05:31:02.638588 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-17 05:31:02.638607 | orchestrator | Tuesday 17 February 2026 05:30:52 +0000 (0:01:33.453) 0:03:33.348 ****** 2026-02-17 05:31:02.638628 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:31:02.638646 | orchestrator | 2026-02-17 05:31:02.638662 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-17 05:31:02.638673 | orchestrator | Tuesday 17 February 2026 05:30:54 +0000 (0:00:01.870) 0:03:35.218 ****** 2026-02-17 05:31:02.638684 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:31:02.638695 | orchestrator | 2026-02-17 05:31:02.638706 | orchestrator | 
TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-17 05:31:02.638717 | orchestrator | Tuesday 17 February 2026 05:30:58 +0000 (0:00:03.365) 0:03:38.584 ****** 2026-02-17 05:31:02.638728 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:31:02.638738 | orchestrator | 2026-02-17 05:31:02.638749 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-17 05:31:02.638760 | orchestrator | Tuesday 17 February 2026 05:31:01 +0000 (0:00:03.301) 0:03:41.886 ****** 2026-02-17 05:31:02.638771 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:31:02.638782 | orchestrator | 2026-02-17 05:31:02.638793 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-17 05:31:02.638815 | orchestrator | Tuesday 17 February 2026 05:31:02 +0000 (0:00:01.199) 0:03:43.085 ****** 2026-02-17 05:31:04.979499 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:31:04.979621 | orchestrator | 2026-02-17 05:31:04.979651 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:31:04.979673 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:31:04.979687 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-17 05:31:04.979698 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-17 05:31:04.979709 | orchestrator | 2026-02-17 05:31:04.979721 | orchestrator | 2026-02-17 05:31:04.979732 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:31:04.979743 | orchestrator | Tuesday 17 February 2026 05:31:04 +0000 (0:00:01.964) 0:03:45.050 ****** 2026-02-17 05:31:04.979754 | orchestrator | =============================================================================== 
2026-02-17 05:31:04.979764 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 93.45s 2026-02-17 05:31:04.979775 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.89s 2026-02-17 05:31:04.979786 | orchestrator | opensearch : Perform a flush -------------------------------------------- 4.15s 2026-02-17 05:31:04.979797 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.73s 2026-02-17 05:31:04.979837 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.58s 2026-02-17 05:31:04.979849 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.51s 2026-02-17 05:31:04.979860 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.50s 2026-02-17 05:31:04.979871 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.50s 2026-02-17 05:31:04.979882 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.37s 2026-02-17 05:31:04.979894 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.30s 2026-02-17 05:31:04.979913 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.15s 2026-02-17 05:31:04.979933 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.44s 2026-02-17 05:31:04.979951 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.38s 2026-02-17 05:31:04.979970 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.12s 2026-02-17 05:31:04.979989 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.05s 2026-02-17 05:31:04.980057 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.03s 2026-02-17 
05:31:04.980077 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.96s 2026-02-17 05:31:04.980107 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.94s 2026-02-17 05:31:04.980121 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.89s 2026-02-17 05:31:04.980134 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.87s 2026-02-17 05:31:05.322244 | orchestrator | + osism apply -a upgrade memcached 2026-02-17 05:31:07.386416 | orchestrator | 2026-02-17 05:31:07 | INFO  | Task dbb5eb63-35aa-4d01-a899-68983847f93e (memcached) was prepared for execution. 2026-02-17 05:31:07.386516 | orchestrator | 2026-02-17 05:31:07 | INFO  | It takes a moment until task dbb5eb63-35aa-4d01-a899-68983847f93e (memcached) has been started and output is visible here. 2026-02-17 05:31:31.259779 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-17 05:31:31.259894 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-17 05:31:31.259924 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-17 05:31:31.259936 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-17 05:31:31.259961 | orchestrator | 2026-02-17 05:31:31.259974 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 05:31:31.260065 | orchestrator | 2026-02-17 05:31:31.260079 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 05:31:31.260091 | orchestrator | Tuesday 17 February 2026 05:31:13 +0000 (0:00:01.412) 0:00:01.412 ****** 2026-02-17 05:31:31.260103 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:31:31.260116 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:31:31.260127 | orchestrator | ok: [testbed-node-2] 
2026-02-17 05:31:31.260139 | orchestrator | 2026-02-17 05:31:31.260151 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 05:31:31.260164 | orchestrator | Tuesday 17 February 2026 05:31:13 +0000 (0:00:00.670) 0:00:02.083 ****** 2026-02-17 05:31:31.260176 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-17 05:31:31.260188 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-17 05:31:31.260201 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-17 05:31:31.260213 | orchestrator | 2026-02-17 05:31:31.260224 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-17 05:31:31.260260 | orchestrator | 2026-02-17 05:31:31.260272 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-17 05:31:31.260284 | orchestrator | Tuesday 17 February 2026 05:31:14 +0000 (0:00:01.059) 0:00:03.143 ****** 2026-02-17 05:31:31.260295 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:31:31.260307 | orchestrator | 2026-02-17 05:31:31.260321 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-17 05:31:31.260334 | orchestrator | Tuesday 17 February 2026 05:31:15 +0000 (0:00:01.000) 0:00:04.143 ****** 2026-02-17 05:31:31.260347 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-17 05:31:31.260361 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-17 05:31:31.260373 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-17 05:31:31.260386 | orchestrator | 2026-02-17 05:31:31.260399 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-17 05:31:31.260412 | orchestrator | Tuesday 17 February 2026 05:31:16 +0000 (0:00:00.889) 0:00:05.032 
****** 2026-02-17 05:31:31.260425 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-17 05:31:31.260437 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-17 05:31:31.260451 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-17 05:31:31.260463 | orchestrator | 2026-02-17 05:31:31.260477 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-17 05:31:31.260490 | orchestrator | Tuesday 17 February 2026 05:31:18 +0000 (0:00:01.736) 0:00:06.769 ****** 2026-02-17 05:31:31.260507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 05:31:31.260540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 05:31:31.260574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-17 05:31:31.260589 | orchestrator | 2026-02-17 05:31:31.260602 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-17 05:31:31.260624 | orchestrator | Tuesday 17 February 2026 05:31:19 +0000 (0:00:01.280) 0:00:08.050 ****** 2026-02-17 05:31:31.260637 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:31:31.260650 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:31:31.260663 | orchestrator | } 2026-02-17 05:31:31.260676 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:31:31.260688 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:31:31.260699 | orchestrator | } 2026-02-17 05:31:31.260710 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 05:31:31.260722 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:31:31.260733 | orchestrator | } 2026-02-17 05:31:31.260745 | orchestrator | 2026-02-17 05:31:31.260757 | orchestrator | TASK [service-check-containers : Include tasks] 
******************************** 2026-02-17 05:31:31.260768 | orchestrator | Tuesday 17 February 2026 05:31:20 +0000 (0:00:00.374) 0:00:08.425 ****** 2026-02-17 05:31:31.260780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 05:31:31.260793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 05:31:31.260805 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-17 05:31:31.260817 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-17 
05:31:31.260839 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:31:31.260851 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:31:31.260869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-17 05:31:31.260882 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:31:31.260893 | orchestrator | 2026-02-17 05:31:31.260904 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-17 05:31:31.260916 | orchestrator | Tuesday 17 February 2026 05:31:21 +0000 (0:00:01.318) 0:00:09.744 ****** 2026-02-17 05:31:31.260934 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:31:31.260946 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:31:31.260964 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:31:31.605665 | orchestrator | 2026-02-17 05:31:31.605744 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:31:31.605755 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 05:31:31.605764 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 05:31:31.605770 | orchestrator | testbed-node-2 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 05:31:31.605777 | orchestrator | 2026-02-17 05:31:31.605783 | orchestrator | 2026-02-17 05:31:31.605790 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:31:31.605796 | orchestrator | Tuesday 17 February 2026 05:31:31 +0000 (0:00:09.879) 0:00:19.624 ****** 2026-02-17 05:31:31.605803 | orchestrator | =============================================================================== 2026-02-17 05:31:31.605809 | orchestrator | memcached : Restart memcached container --------------------------------- 9.88s 2026-02-17 05:31:31.605815 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.74s 2026-02-17 05:31:31.605821 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.32s 2026-02-17 05:31:31.605828 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.28s 2026-02-17 05:31:31.605834 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s 2026-02-17 05:31:31.605840 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.00s 2026-02-17 05:31:31.605846 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.89s 2026-02-17 05:31:31.605853 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s 2026-02-17 05:31:31.605859 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.37s 2026-02-17 05:31:31.935388 | orchestrator | + osism apply -a upgrade redis 2026-02-17 05:31:33.977279 | orchestrator | 2026-02-17 05:31:33 | INFO  | Task a2cf99e5-5cb1-4a8a-b9b4-da428921c5b3 (redis) was prepared for execution. 
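The `PLAY RECAP` blocks in this console are the lines worth machine-checking: a healthy run shows `failed=0` and `unreachable=0` for every host. As an illustration only (this helper is not part of the job; the function name and regex are an assumption based on the recap format visible above), a small parser for such recap lines might look like:

```python
import re

# Matches Ansible PLAY RECAP host lines, e.g.
# "testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6 ..."
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_failures(lines):
    """Return hosts whose recap line reports failed or unreachable tasks."""
    bad = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m and (int(m.group("failed")) or int(m.group("unreachable"))):
            bad.append(m.group("host"))
    return bad
```

Feeding it the three `testbed-node-*` recap lines from the memcached play above would return an empty list, since all hosts report `failed=0` and `unreachable=0`.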
2026-02-17 05:31:33.977424 | orchestrator | 2026-02-17 05:31:33 | INFO  | It takes a moment until task a2cf99e5-5cb1-4a8a-b9b4-da428921c5b3 (redis) has been started and output is visible here. 2026-02-17 05:31:45.570313 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-17 05:31:45.570423 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-17 05:31:45.570452 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-17 05:31:45.570463 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-17 05:31:45.570486 | orchestrator | 2026-02-17 05:31:45.570499 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 05:31:45.570510 | orchestrator | 2026-02-17 05:31:45.570522 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 05:31:45.570533 | orchestrator | Tuesday 17 February 2026 05:31:39 +0000 (0:00:01.012) 0:00:01.012 ****** 2026-02-17 05:31:45.570544 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:31:45.570557 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:31:45.570568 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:31:45.570580 | orchestrator | 2026-02-17 05:31:45.570591 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 05:31:45.570629 | orchestrator | Tuesday 17 February 2026 05:31:40 +0000 (0:00:00.836) 0:00:01.848 ****** 2026-02-17 05:31:45.570648 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-17 05:31:45.570667 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-17 05:31:45.570684 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-17 05:31:45.570701 | orchestrator | 2026-02-17 05:31:45.570718 | orchestrator | PLAY [Apply role redis] ******************************************************** 
2026-02-17 05:31:45.570735 | orchestrator | 2026-02-17 05:31:45.570753 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-17 05:31:45.570771 | orchestrator | Tuesday 17 February 2026 05:31:41 +0000 (0:00:00.819) 0:00:02.668 ****** 2026-02-17 05:31:45.570790 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:31:45.570809 | orchestrator | 2026-02-17 05:31:45.570843 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-17 05:31:45.570865 | orchestrator | Tuesday 17 February 2026 05:31:42 +0000 (0:00:01.015) 0:00:03.684 ****** 2026-02-17 05:31:45.570891 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.570920 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.570943 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.571025 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.571078 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-17 05:31:45.571116 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.571138 | orchestrator | 2026-02-17 05:31:45.571160 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-17 05:31:45.571180 | orchestrator | Tuesday 17 February 2026 05:31:43 +0000 (0:00:01.393) 0:00:05.077 ****** 2026-02-17 05:31:45.571208 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.571228 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.571247 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.571266 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:45.571301 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.469930 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470147 | orchestrator | 2026-02-17 05:31:50.470164 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-17 05:31:50.470173 | orchestrator | Tuesday 17 February 2026 05:31:45 +0000 (0:00:02.103) 0:00:07.181 ****** 2026-02-17 05:31:50.470198 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470216 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470224 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-17 05:31:50.470253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470278 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470291 | orchestrator | 2026-02-17 05:31:50.470303 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-17 05:31:50.470316 | orchestrator | Tuesday 17 February 2026 05:31:48 +0000 (0:00:02.838) 0:00:10.020 ****** 2026-02-17 05:31:50.470325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:31:50.470409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-17 05:32:13.255315 | orchestrator | 2026-02-17 05:32:13.255433 | 
orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-17 05:32:13.255453 | orchestrator | Tuesday 17 February 2026 05:31:50 +0000 (0:00:02.057) 0:00:12.077 ****** 2026-02-17 05:32:13.255466 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:32:13.255480 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:32:13.255491 | orchestrator | } 2026-02-17 05:32:13.255503 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:32:13.255514 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:32:13.255525 | orchestrator | } 2026-02-17 05:32:13.255537 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 05:32:13.255564 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:32:13.255575 | orchestrator | } 2026-02-17 05:32:13.255587 | orchestrator | 2026-02-17 05:32:13.255598 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-17 05:32:13.255610 | orchestrator | Tuesday 17 February 2026 05:31:51 +0000 (0:00:00.592) 0:00:12.670 ****** 2026-02-17 05:32:13.255623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-17 05:32:13.255638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-17 05:32:13.255650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-17 05:32:13.255692 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-17 05:32:13.255704 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-17 05:32:13.255727 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:32:13.255738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-17 05:32:13.255750 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:32:13.255781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-17 05:32:13.255799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-17 05:32:13.255811 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:32:13.255822 | orchestrator | 2026-02-17 05:32:13.255834 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-17 05:32:13.255848 | orchestrator | Tuesday 17 February 2026 05:31:52 +0000 (0:00:01.052) 0:00:13.723 ****** 2026-02-17 05:32:13.255861 | orchestrator | 2026-02-17 05:32:13.255875 | orchestrator | TASK [redis : 
Flush handlers] ************************************************** 2026-02-17 05:32:13.255887 | orchestrator | Tuesday 17 February 2026 05:31:52 +0000 (0:00:00.078) 0:00:13.802 ****** 2026-02-17 05:32:13.255900 | orchestrator | 2026-02-17 05:32:13.255913 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-17 05:32:13.255958 | orchestrator | Tuesday 17 February 2026 05:31:52 +0000 (0:00:00.072) 0:00:13.874 ****** 2026-02-17 05:32:13.255978 | orchestrator | 2026-02-17 05:32:13.255999 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-17 05:32:13.256032 | orchestrator | Tuesday 17 February 2026 05:31:52 +0000 (0:00:00.071) 0:00:13.946 ****** 2026-02-17 05:32:13.256046 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:32:13.256058 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:32:13.256071 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:32:13.256083 | orchestrator | 2026-02-17 05:32:13.256095 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-17 05:32:13.256108 | orchestrator | Tuesday 17 February 2026 05:32:02 +0000 (0:00:09.910) 0:00:23.857 ****** 2026-02-17 05:32:13.256120 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:32:13.256133 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:32:13.256146 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:32:13.256157 | orchestrator | 2026-02-17 05:32:13.256170 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:32:13.256183 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 05:32:13.256197 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 05:32:13.256208 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 
skipped=1  rescued=0 ignored=0 2026-02-17 05:32:13.256219 | orchestrator | 2026-02-17 05:32:13.256229 | orchestrator | 2026-02-17 05:32:13.256241 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:32:13.256252 | orchestrator | Tuesday 17 February 2026 05:32:12 +0000 (0:00:10.603) 0:00:34.460 ****** 2026-02-17 05:32:13.256262 | orchestrator | =============================================================================== 2026-02-17 05:32:13.256273 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.60s 2026-02-17 05:32:13.256284 | orchestrator | redis : Restart redis container ----------------------------------------- 9.91s 2026-02-17 05:32:13.256294 | orchestrator | redis : Copying over redis config files --------------------------------- 2.84s 2026-02-17 05:32:13.256305 | orchestrator | redis : Copying over default config.json files -------------------------- 2.10s 2026-02-17 05:32:13.256316 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.06s 2026-02-17 05:32:13.256326 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.39s 2026-02-17 05:32:13.256337 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.05s 2026-02-17 05:32:13.256348 | orchestrator | redis : include_tasks --------------------------------------------------- 1.02s 2026-02-17 05:32:13.256358 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2026-02-17 05:32:13.256369 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-02-17 05:32:13.256380 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.59s 2026-02-17 05:32:13.256391 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2026-02-17 
05:32:13.568660 | orchestrator | + osism apply -a upgrade mariadb 2026-02-17 05:32:15.804678 | orchestrator | 2026-02-17 05:32:15 | INFO  | Task 4df0a7c6-01ee-440d-8a90-f0975aa2eb48 (mariadb) was prepared for execution. 2026-02-17 05:32:15.804774 | orchestrator | 2026-02-17 05:32:15 | INFO  | It takes a moment until task 4df0a7c6-01ee-440d-8a90-f0975aa2eb48 (mariadb) has been started and output is visible here. 2026-02-17 05:32:42.597556 | orchestrator | 2026-02-17 05:32:42.597666 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 05:32:42.597682 | orchestrator | 2026-02-17 05:32:42.597693 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 05:32:42.597703 | orchestrator | Tuesday 17 February 2026 05:32:21 +0000 (0:00:01.530) 0:00:01.530 ****** 2026-02-17 05:32:42.597715 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:32:42.597726 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:32:42.597758 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:32:42.597773 | orchestrator | 2026-02-17 05:32:42.597790 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 05:32:42.597806 | orchestrator | Tuesday 17 February 2026 05:32:23 +0000 (0:00:02.225) 0:00:03.756 ****** 2026-02-17 05:32:42.597821 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-17 05:32:42.597855 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-17 05:32:42.597874 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-17 05:32:42.597949 | orchestrator | 2026-02-17 05:32:42.597960 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-17 05:32:42.597969 | orchestrator | 2026-02-17 05:32:42.597979 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-17 
05:32:42.597992 | orchestrator | Tuesday 17 February 2026 05:32:26 +0000 (0:00:02.730) 0:00:06.486 ****** 2026-02-17 05:32:42.598009 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:32:42.598107 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-17 05:32:42.598124 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-17 05:32:42.598135 | orchestrator | 2026-02-17 05:32:42.598146 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-17 05:32:42.598157 | orchestrator | Tuesday 17 February 2026 05:32:28 +0000 (0:00:01.467) 0:00:07.953 ****** 2026-02-17 05:32:42.598169 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:32:42.598181 | orchestrator | 2026-02-17 05:32:42.598192 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-17 05:32:42.598203 | orchestrator | Tuesday 17 February 2026 05:32:29 +0000 (0:00:01.723) 0:00:09.677 ****** 2026-02-17 05:32:42.598221 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-17 05:32:42.598274 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-17 05:32:42.598309 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-17 05:32:42.598327 | orchestrator | 2026-02-17 05:32:42.598343 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-17 05:32:42.598360 | orchestrator | Tuesday 17 February 2026 05:32:33 +0000 (0:00:04.024) 0:00:13.702 ****** 2026-02-17 05:32:42.598375 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:32:42.598393 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:32:42.598410 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:32:42.598426 | orchestrator | 2026-02-17 05:32:42.598442 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-17 05:32:42.598458 | orchestrator | Tuesday 17 February 2026 05:32:35 +0000 (0:00:01.615) 0:00:15.318 ****** 2026-02-17 05:32:42.598487 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:32:42.598503 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:32:42.598518 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:32:42.598533 | orchestrator | 2026-02-17 05:32:42.598548 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-17 05:32:42.598563 | orchestrator | Tuesday 17 February 2026 05:32:37 +0000 (0:00:02.324) 0:00:17.643 ****** 
2026-02-17 05:32:42.598602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:32:54.901386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:32:54.901570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:32:54.901599 | orchestrator |
2026-02-17 05:32:54.901616 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-17 05:32:54.901631 | orchestrator | Tuesday 17 February 2026 05:32:42 +0000 (0:00:04.846) 0:00:22.490 ******
2026-02-17 05:32:54.901646 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:32:54.901663 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:32:54.901677 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:32:54.901691 | orchestrator |
2026-02-17 05:32:54.901701 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-17 05:32:54.901732 | orchestrator | Tuesday 17 February 2026 05:32:44 +0000 (0:00:02.064) 0:00:24.555 ******
2026-02-17 05:32:54.901747 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:32:54.901762 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:32:54.901775 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:32:54.901790 | orchestrator |
2026-02-17 05:32:54.901804 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-17 05:32:54.901819 | orchestrator | Tuesday 17 February 2026 05:32:49 +0000 (0:00:04.830) 0:00:29.385 ******
2026-02-17 05:32:54.901835 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:32:54.901849 | orchestrator |
2026-02-17 05:32:54.901864 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-17 05:32:54.901935 | orchestrator | Tuesday 17 February 2026 05:32:51 +0000 (0:00:01.953) 0:00:31.338 ******
2026-02-17 05:32:54.901954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:32:54.901986 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:32:54.902097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:03.045582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:03.045734 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:03.045753 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:03.045765 | orchestrator |
2026-02-17 05:33:03.045777 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-17 05:33:03.045789 | orchestrator | Tuesday 17 February 2026 05:32:54 +0000 (0:00:03.458) 0:00:34.796 ******
2026-02-17 05:33:03.045813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:03.045826 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:03.045916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:03.045944 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:03.045962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:03.045975 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:03.045986 | orchestrator |
2026-02-17 05:33:03.045998 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-17 05:33:03.046009 | orchestrator | Tuesday 17 February 2026 05:32:58 +0000 (0:00:03.655) 0:00:38.452 ******
2026-02-17 05:33:03.046098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:07.387343 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:07.387471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:07.387492 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:07.387506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:07.387543 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:07.387556 | orchestrator |
2026-02-17 05:33:07.387568 | orchestrator | TASK [service-check-containers : mariadb | Check containers] *******************
2026-02-17 05:33:07.387580 | orchestrator | Tuesday 17 February 2026 05:33:03 +0000 (0:00:04.495) 0:00:42.947 ******
2026-02-17 05:33:07.387617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:07.387632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:07.387662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:22.522765 | orchestrator |
2026-02-17 05:33:22.522899 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] ***
2026-02-17 05:33:22.522916 | orchestrator | Tuesday 17 February 2026 05:33:07 +0000 (0:00:04.336) 0:00:47.284 ******
2026-02-17 05:33:22.522928 | orchestrator | changed: [testbed-node-0] => {
2026-02-17 05:33:22.522940 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:33:22.522949 | orchestrator | }
2026-02-17 05:33:22.522976 | orchestrator | changed: [testbed-node-1] => {
2026-02-17 05:33:22.522986 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:33:22.522995 | orchestrator | }
2026-02-17 05:33:22.523005 | orchestrator | changed: [testbed-node-2] => {
2026-02-17 05:33:22.523015 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:33:22.523026 | orchestrator | }
2026-02-17 05:33:22.523036 | orchestrator |
2026-02-17 05:33:22.523045 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-17 05:33:22.523055 | orchestrator | Tuesday 17 February 2026 05:33:08 +0000 (0:00:01.420) 0:00:48.705 ******
2026-02-17 05:33:22.523069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:22.523100 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:22.523130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:22.523149 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:22.523160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-17 05:33:22.523177 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:22.523187 | orchestrator |
2026-02-17 05:33:22.523197 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-02-17 05:33:22.523207 | orchestrator | Tuesday 17 February 2026 05:33:12 +0000 (0:00:04.017) 0:00:52.722 ******
2026-02-17 05:33:22.523215 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:22.523225 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:22.523234 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:22.523243 | orchestrator |
2026-02-17 05:33:22.523252 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-02-17 05:33:22.523262 | orchestrator | Tuesday 17 February 2026 05:33:14 +0000 (0:00:01.371) 0:00:54.094 ******
2026-02-17 05:33:22.523270 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:22.523279 | orchestrator |
2026-02-17 05:33:22.523288 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-02-17 05:33:22.523296 | orchestrator | Tuesday 17 February 2026 05:33:15 +0000 (0:00:01.370) 0:00:55.315 ******
2026-02-17 05:33:22.523306 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:22.523316 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:22.523325 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:22.523334 | orchestrator |
2026-02-17 05:33:22.523344 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-02-17 05:33:22.523354 | orchestrator | Tuesday 17 February 2026 05:33:16 +0000 (0:00:01.592) 0:00:56.685 ******
2026-02-17 05:33:22.523364 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:22.523374 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:22.523384 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:22.523395 | orchestrator |
2026-02-17 05:33:22.523405 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-02-17 05:33:22.523415 | orchestrator | Tuesday 17 February 2026 05:33:18 +0000 (0:00:01.393) 0:00:58.278 ******
2026-02-17 05:33:22.523424 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:22.523433 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:22.523442 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:22.523452 | orchestrator |
2026-02-17 05:33:22.523462 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-02-17 05:33:22.523472 | orchestrator | Tuesday 17 February 2026 05:33:19 +0000 (0:00:01.389) 0:00:59.672 ******
2026-02-17 05:33:22.523482 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:22.523492 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:22.523501 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:22.523511 | orchestrator |
2026-02-17 05:33:22.523521 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-17 05:33:22.523536 | orchestrator | Tuesday 17 February 2026 05:33:21 +0000 (0:00:01.389) 0:01:01.061 ******
2026-02-17 05:33:22.523541 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:33:22.523547 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:33:22.523553 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:33:22.523559 |
orchestrator | 2026-02-17 05:33:22.523575 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-17 05:33:40.484492 | orchestrator | Tuesday 17 February 2026 05:33:22 +0000 (0:00:01.357) 0:01:02.419 ****** 2026-02-17 05:33:40.484622 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.484646 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.484662 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.484677 | orchestrator | 2026-02-17 05:33:40.484712 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-17 05:33:40.484728 | orchestrator | Tuesday 17 February 2026 05:33:24 +0000 (0:00:01.678) 0:01:04.097 ****** 2026-02-17 05:33:40.484742 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 05:33:40.484757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 05:33:40.484771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 05:33:40.484786 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.484800 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-17 05:33:40.484902 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-17 05:33:40.484921 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-17 05:33:40.484937 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.484952 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-17 05:33:40.484965 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-17 05:33:40.484974 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-17 05:33:40.484983 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.484991 | orchestrator | 2026-02-17 05:33:40.485001 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to 
temp file] *** 2026-02-17 05:33:40.485012 | orchestrator | Tuesday 17 February 2026 05:33:25 +0000 (0:00:01.393) 0:01:05.490 ****** 2026-02-17 05:33:40.485022 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485032 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485042 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.485051 | orchestrator | 2026-02-17 05:33:40.485062 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-17 05:33:40.485072 | orchestrator | Tuesday 17 February 2026 05:33:26 +0000 (0:00:01.338) 0:01:06.829 ****** 2026-02-17 05:33:40.485081 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485091 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485101 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.485111 | orchestrator | 2026-02-17 05:33:40.485121 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-17 05:33:40.485131 | orchestrator | Tuesday 17 February 2026 05:33:28 +0000 (0:00:01.356) 0:01:08.185 ****** 2026-02-17 05:33:40.485143 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485158 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485172 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.485282 | orchestrator | 2026-02-17 05:33:40.485299 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-17 05:33:40.485316 | orchestrator | Tuesday 17 February 2026 05:33:29 +0000 (0:00:01.337) 0:01:09.523 ****** 2026-02-17 05:33:40.485331 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485347 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485362 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.485376 | orchestrator | 2026-02-17 05:33:40.485390 | orchestrator | TASK [mariadb : Starting first MariaDB container] 
****************************** 2026-02-17 05:33:40.485405 | orchestrator | Tuesday 17 February 2026 05:33:30 +0000 (0:00:01.369) 0:01:10.892 ****** 2026-02-17 05:33:40.485448 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485463 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485478 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.485492 | orchestrator | 2026-02-17 05:33:40.485506 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-17 05:33:40.485521 | orchestrator | Tuesday 17 February 2026 05:33:32 +0000 (0:00:01.363) 0:01:12.256 ****** 2026-02-17 05:33:40.485536 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485551 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485566 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.485579 | orchestrator | 2026-02-17 05:33:40.485594 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-17 05:33:40.485609 | orchestrator | Tuesday 17 February 2026 05:33:33 +0000 (0:00:01.634) 0:01:13.891 ****** 2026-02-17 05:33:40.485624 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485639 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485652 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.485668 | orchestrator | 2026-02-17 05:33:40.485683 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-17 05:33:40.485698 | orchestrator | Tuesday 17 February 2026 05:33:35 +0000 (0:00:01.375) 0:01:15.266 ****** 2026-02-17 05:33:40.485712 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485727 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485742 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:40.485757 | orchestrator | 2026-02-17 05:33:40.485772 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] 
**************************** 2026-02-17 05:33:40.485787 | orchestrator | Tuesday 17 February 2026 05:33:36 +0000 (0:00:01.403) 0:01:16.670 ****** 2026-02-17 05:33:40.485881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}})  2026-02-17 05:33:40.485902 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:40.485919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:33:40.485947 
| orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:40.485978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:33:57.282398 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 05:33:57.282555 | orchestrator | 2026-02-17 05:33:57.282579 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-17 05:33:57.282594 | orchestrator | Tuesday 17 February 2026 05:33:40 +0000 (0:00:03.705) 0:01:20.376 ****** 2026-02-17 05:33:57.282605 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:57.282617 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:57.282628 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:57.282639 | orchestrator | 2026-02-17 05:33:57.282651 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-17 05:33:57.282695 | orchestrator | Tuesday 17 February 2026 05:33:42 +0000 (0:00:01.553) 0:01:21.929 ****** 2026-02-17 05:33:57.282712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:33:57.282729 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:57.282779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:33:57.282827 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:57.282842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-17 05:33:57.282864 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:57.282876 | orchestrator | 2026-02-17 05:33:57.282887 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-17 05:33:57.282898 | orchestrator | Tuesday 17 February 2026 05:33:45 +0000 (0:00:03.511) 0:01:25.441 ****** 2026-02-17 05:33:57.282908 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:57.282919 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:57.282930 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:57.282941 | orchestrator | 2026-02-17 05:33:57.282952 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-17 05:33:57.282963 | orchestrator | Tuesday 17 February 2026 05:33:47 +0000 (0:00:01.727) 0:01:27.169 ****** 2026-02-17 05:33:57.282974 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:57.282985 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:57.282996 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:57.283006 | orchestrator | 2026-02-17 05:33:57.283017 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-17 05:33:57.283028 | orchestrator | Tuesday 17 February 2026 05:33:48 +0000 (0:00:01.419) 0:01:28.588 ****** 2026-02-17 05:33:57.283039 | orchestrator | skipping: [testbed-node-0] 2026-02-17 
05:33:57.283050 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:57.283061 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:57.283071 | orchestrator | 2026-02-17 05:33:57.283082 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-17 05:33:57.283092 | orchestrator | Tuesday 17 February 2026 05:33:50 +0000 (0:00:01.358) 0:01:29.947 ****** 2026-02-17 05:33:57.283103 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:57.283114 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:57.283125 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:57.283135 | orchestrator | 2026-02-17 05:33:57.283146 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-17 05:33:57.283157 | orchestrator | Tuesday 17 February 2026 05:33:51 +0000 (0:00:01.775) 0:01:31.722 ****** 2026-02-17 05:33:57.283167 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:33:57.283178 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:33:57.283189 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:33:57.283207 | orchestrator | 2026-02-17 05:33:57.283218 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-17 05:33:57.283234 | orchestrator | Tuesday 17 February 2026 05:33:53 +0000 (0:00:01.898) 0:01:33.621 ****** 2026-02-17 05:33:57.283245 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:33:57.283257 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:33:57.283268 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:33:57.283279 | orchestrator | 2026-02-17 05:33:57.283290 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-17 05:33:57.283301 | orchestrator | Tuesday 17 February 2026 05:33:55 +0000 (0:00:01.916) 0:01:35.538 ****** 2026-02-17 05:33:57.283312 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:33:57.283323 | 
orchestrator | ok: [testbed-node-1] 2026-02-17 05:33:57.283334 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:33:57.283344 | orchestrator | 2026-02-17 05:33:57.283355 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-17 05:33:57.283366 | orchestrator | Tuesday 17 February 2026 05:33:57 +0000 (0:00:01.428) 0:01:36.967 ****** 2026-02-17 05:33:57.283385 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:36:41.218231 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:36:41.218313 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:36:41.218320 | orchestrator | 2026-02-17 05:36:41.218326 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-17 05:36:41.218331 | orchestrator | Tuesday 17 February 2026 05:33:58 +0000 (0:00:01.410) 0:01:38.377 ****** 2026-02-17 05:36:41.218336 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:36:41.218340 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:36:41.218345 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:36:41.218349 | orchestrator | 2026-02-17 05:36:41.218353 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-17 05:36:41.218358 | orchestrator | Tuesday 17 February 2026 05:34:00 +0000 (0:00:02.097) 0:01:40.474 ****** 2026-02-17 05:36:41.218362 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:36:41.218366 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:36:41.218370 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:36:41.218375 | orchestrator | 2026-02-17 05:36:41.218379 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-17 05:36:41.218383 | orchestrator | Tuesday 17 February 2026 05:34:01 +0000 (0:00:01.381) 0:01:41.856 ****** 2026-02-17 05:36:41.218388 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:36:41.218393 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:36:41.218397 
| orchestrator | skipping: [testbed-node-2] 2026-02-17 05:36:41.218401 | orchestrator | 2026-02-17 05:36:41.218405 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-17 05:36:41.218409 | orchestrator | Tuesday 17 February 2026 05:34:03 +0000 (0:00:01.371) 0:01:43.228 ****** 2026-02-17 05:36:41.218414 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:36:41.218418 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:36:41.218422 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:36:41.218426 | orchestrator | 2026-02-17 05:36:41.218430 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-17 05:36:41.218434 | orchestrator | Tuesday 17 February 2026 05:34:06 +0000 (0:00:03.566) 0:01:46.794 ****** 2026-02-17 05:36:41.218439 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:36:41.218443 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:36:41.218447 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:36:41.218451 | orchestrator | 2026-02-17 05:36:41.218455 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-17 05:36:41.218459 | orchestrator | Tuesday 17 February 2026 05:34:08 +0000 (0:00:01.481) 0:01:48.276 ****** 2026-02-17 05:36:41.218463 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:36:41.218467 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:36:41.218471 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:36:41.218476 | orchestrator | 2026-02-17 05:36:41.218480 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-17 05:36:41.218499 | orchestrator | Tuesday 17 February 2026 05:34:09 +0000 (0:00:01.453) 0:01:49.730 ****** 2026-02-17 05:36:41.218504 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:36:41.218508 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:36:41.218512 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 05:36:41.218516 | orchestrator | 2026-02-17 05:36:41.218521 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-17 05:36:41.218525 | orchestrator | Tuesday 17 February 2026 05:34:11 +0000 (0:00:01.772) 0:01:51.502 ****** 2026-02-17 05:36:41.218529 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:36:41.218533 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:36:41.218537 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:36:41.218542 | orchestrator | 2026-02-17 05:36:41.218546 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-17 05:36:41.218550 | orchestrator | Tuesday 17 February 2026 05:34:13 +0000 (0:00:01.614) 0:01:53.117 ****** 2026-02-17 05:36:41.218554 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:36:41.218558 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:36:41.218562 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:36:41.218566 | orchestrator | 2026-02-17 05:36:41.218571 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-17 05:36:41.218575 | orchestrator | Tuesday 17 February 2026 05:34:14 +0000 (0:00:01.602) 0:01:54.719 ****** 2026-02-17 05:36:41.218579 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:36:41.218583 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:36:41.218587 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:36:41.218591 | orchestrator | 2026-02-17 05:36:41.218595 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-17 05:36:41.218600 | orchestrator | Tuesday 17 February 2026 05:34:16 +0000 (0:00:01.617) 0:01:56.337 ****** 2026-02-17 05:36:41.218638 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:36:41.218643 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:36:41.218647 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 05:36:41.218651 | orchestrator |
2026-02-17 05:36:41.218655 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-17 05:36:41.218659 | orchestrator |
2026-02-17 05:36:41.218663 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-17 05:36:41.218668 | orchestrator | Tuesday 17 February 2026 05:34:18 +0000 (0:00:01.907) 0:01:58.245 ******
2026-02-17 05:36:41.218672 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:36:41.218676 | orchestrator |
2026-02-17 05:36:41.218681 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-17 05:36:41.218685 | orchestrator | Tuesday 17 February 2026 05:34:44 +0000 (0:00:26.547) 0:02:24.792 ******
2026-02-17 05:36:41.218700 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:36:41.218704 | orchestrator |
2026-02-17 05:36:41.218708 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-17 05:36:41.218713 | orchestrator | Tuesday 17 February 2026 05:34:49 +0000 (0:00:04.744) 0:02:29.537 ******
2026-02-17 05:36:41.218717 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:36:41.218721 | orchestrator |
2026-02-17 05:36:41.218725 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-17 05:36:41.218729 | orchestrator |
2026-02-17 05:36:41.218733 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-17 05:36:41.218737 | orchestrator | Tuesday 17 February 2026 05:34:52 +0000 (0:00:02.994) 0:02:32.532 ******
2026-02-17 05:36:41.218742 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:36:41.218746 | orchestrator |
2026-02-17 05:36:41.218750 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-17 05:36:41.218764 | orchestrator | Tuesday 17 February 2026 05:35:19 +0000 (0:00:27.183) 0:02:59.716 ******
2026-02-17 05:36:41.218768 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left).
2026-02-17 05:36:41.218773 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:36:41.218781 | orchestrator |
2026-02-17 05:36:41.218785 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-17 05:36:41.218790 | orchestrator | Tuesday 17 February 2026 05:35:27 +0000 (0:00:07.965) 0:03:07.681 ******
2026-02-17 05:36:41.218794 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:36:41.218799 | orchestrator |
2026-02-17 05:36:41.218804 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-17 05:36:41.218809 | orchestrator |
2026-02-17 05:36:41.218814 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-17 05:36:41.218819 | orchestrator | Tuesday 17 February 2026 05:35:31 +0000 (0:00:03.549) 0:03:11.231 ******
2026-02-17 05:36:41.218823 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:36:41.218828 | orchestrator |
2026-02-17 05:36:41.218833 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-17 05:36:41.218838 | orchestrator | Tuesday 17 February 2026 05:35:58 +0000 (0:00:27.120) 0:03:38.351 ******
2026-02-17 05:36:41.218843 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2026-02-17 05:36:41.218847 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:36:41.218852 | orchestrator |
2026-02-17 05:36:41.218857 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-17 05:36:41.218862 | orchestrator | Tuesday 17 February 2026 05:36:06 +0000 (0:00:07.982) 0:03:46.334 ******
2026-02-17 05:36:41.218867 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-17 05:36:41.218872 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-17 05:36:41.218877 | orchestrator | mariadb_bootstrap_restart
2026-02-17 05:36:41.218882 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:36:41.218886 | orchestrator |
2026-02-17 05:36:41.218890 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-17 05:36:41.218895 | orchestrator | skipping: no hosts matched
2026-02-17 05:36:41.218899 | orchestrator |
2026-02-17 05:36:41.218903 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-17 05:36:41.218907 | orchestrator | skipping: no hosts matched
2026-02-17 05:36:41.218911 | orchestrator |
2026-02-17 05:36:41.218915 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-17 05:36:41.218919 | orchestrator |
2026-02-17 05:36:41.218924 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-17 05:36:41.218928 | orchestrator | Tuesday 17 February 2026 05:36:10 +0000 (0:00:04.243) 0:03:50.578 ******
2026-02-17 05:36:41.218932 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:36:41.218936 | orchestrator |
2026-02-17 05:36:41.218940 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-17 05:36:41.218944 | orchestrator | Tuesday 17 February 2026 05:36:12 +0000 (0:00:02.025) 0:03:52.603 ******
2026-02-17 05:36:41.218948 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:36:41.218953 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:36:41.218957 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:36:41.218961 | orchestrator |
2026-02-17 05:36:41.218965 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-17 05:36:41.218969 | orchestrator | Tuesday 17 February 2026 05:36:15 +0000 (0:00:03.163) 0:03:55.766 ******
2026-02-17 05:36:41.218974 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:36:41.218978 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:36:41.218982 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:36:41.218986 | orchestrator |
2026-02-17 05:36:41.218990 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-17 05:36:41.218994 | orchestrator | Tuesday 17 February 2026 05:36:19 +0000 (0:00:03.296) 0:03:59.063 ******
2026-02-17 05:36:41.218998 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:36:41.219003 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:36:41.219007 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:36:41.219011 | orchestrator |
2026-02-17 05:36:41.219018 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-17 05:36:41.219023 | orchestrator | Tuesday 17 February 2026 05:36:22 +0000 (0:00:03.291) 0:04:02.354 ******
2026-02-17 05:36:41.219027 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:36:41.219031 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:36:41.219035 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:36:41.219039 | orchestrator |
2026-02-17 05:36:41.219043 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-02-17 05:36:41.219048 | orchestrator | Tuesday 17 February 2026 05:36:25 +0000 (0:00:03.266) 0:04:05.621 ******
2026-02-17 05:36:41.219052 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:36:41.219056 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:36:41.219060 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:36:41.219064 | orchestrator |
2026-02-17 05:36:41.219068 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-02-17 05:36:41.219072 | orchestrator | Tuesday 17 February 2026 05:36:32 +0000 (0:00:06.553) 0:04:12.176 ******
2026-02-17 05:36:41.219076 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:36:41.219083 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:36:41.219087 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:36:41.219091 | orchestrator |
2026-02-17 05:36:41.219096 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-17 05:36:41.219100 | orchestrator | Tuesday 17 February 2026 05:36:36 +0000 (0:00:03.816) 0:04:15.992 ******
2026-02-17 05:36:41.219104 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:36:41.219108 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:36:41.219112 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:36:41.219116 | orchestrator |
2026-02-17 05:36:41.219120 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-17 05:36:41.219125 | orchestrator | Tuesday 17 February 2026 05:36:37 +0000 (0:00:01.721) 0:04:17.713 ******
2026-02-17 05:36:41.219129 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:36:41.219133 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:36:41.219137 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:36:41.219141 | orchestrator |
2026-02-17 05:36:41.219148 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-17 05:37:01.405193 | orchestrator | Tuesday 17 February 2026 05:36:41 +0000 (0:00:03.396) 0:04:21.110 ******
2026-02-17 05:37:01.405288 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:37:01.405304 | orchestrator |
2026-02-17 05:37:01.405314 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-02-17 05:37:01.405322 | orchestrator | Tuesday 17 February 2026 05:36:43 +0000 (0:00:01.993) 0:04:23.104 ******
2026-02-17 05:37:01.405329 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:37:01.405338 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:37:01.405345 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:37:01.405352 | orchestrator |
2026-02-17 05:37:01.405359 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 05:37:01.405367 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-17 05:37:01.405376 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-17 05:37:01.405383 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-17 05:37:01.405390 | orchestrator |
2026-02-17 05:37:01.405397 | orchestrator |
2026-02-17 05:37:01.405404 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 05:37:01.405411 | orchestrator | Tuesday 17 February 2026 05:37:00 +0000 (0:00:17.718) 0:04:40.823 ******
2026-02-17 05:37:01.405418 | orchestrator | ===============================================================================
2026-02-17 05:37:01.405446 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 80.85s
2026-02-17 05:37:01.405454 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 20.69s
2026-02-17 05:37:01.405460 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.72s
2026-02-17 05:37:01.405467 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.79s
2026-02-17 05:37:01.405474 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.55s
2026-02-17 05:37:01.405481 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.85s
2026-02-17 05:37:01.405487 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.83s
2026-02-17 05:37:01.405494 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.50s
2026-02-17 05:37:01.405501 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.34s
2026-02-17 05:37:01.405507 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.02s
2026-02-17 05:37:01.405514 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.02s
2026-02-17 05:37:01.405521 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.82s
2026-02-17 05:37:01.405528 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.71s
2026-02-17 05:37:01.405535 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.66s
2026-02-17 05:37:01.405542 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.57s
2026-02-17 05:37:01.405549 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.51s
2026-02-17 05:37:01.405556 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.46s
2026-02-17 05:37:01.405563 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.40s
2026-02-17 05:37:01.405570 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.30s
2026-02-17 05:37:01.405577 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.29s
2026-02-17 05:37:01.767323 | orchestrator | + osism apply -a upgrade rabbitmq
2026-02-17 05:37:03.877973 | orchestrator | 2026-02-17 05:37:03 | INFO  | Task b8a685b2-dcdf-433f-989e-cdcbc320579a (rabbitmq) was prepared for execution.
2026-02-17 05:37:03.878138 | orchestrator | 2026-02-17 05:37:03 | INFO  | It takes a moment until task b8a685b2-dcdf-433f-989e-cdcbc320579a (rabbitmq) has been started and output is visible here.
2026-02-17 05:37:49.097260 | orchestrator |
2026-02-17 05:37:49.097377 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-17 05:37:49.097395 | orchestrator |
2026-02-17 05:37:49.097408 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-17 05:37:49.097436 | orchestrator | Tuesday 17 February 2026 05:37:09 +0000 (0:00:01.460) 0:00:01.460 ******
2026-02-17 05:37:49.097448 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:37:49.097461 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:37:49.097472 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:37:49.097483 | orchestrator |
2026-02-17 05:37:49.097494 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-17 05:37:49.097505 | orchestrator | Tuesday 17 February 2026 05:37:11 +0000 (0:00:02.069) 0:00:03.529 ******
2026-02-17 05:37:49.097516 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-17 05:37:49.097527 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-17 05:37:49.097538 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-17 05:37:49.097602 | orchestrator |
2026-02-17 05:37:49.097617 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-17 05:37:49.097628 | orchestrator |
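The MariaDB portion of the log above follows a serial rolling-restart pattern: each node's container is restarted in its own play, and the next node is only touched after the restarted node passes a port-liveness probe (retried up to 10 times, as the `FAILED - RETRYING` lines show) and reports its WSREP state as synced. The following is a minimal, hypothetical Python sketch of that control flow, not OSISM's or kolla-ansible's actual implementation; `restart`, `port_alive`, and `wsrep_synced` are injected callables standing in for the real container restart and Galera health checks.

```python
import time


def wait_until(check, retries=10, delay=0.01):
    """Poll `check` until it returns True; raise after `retries` attempts.

    Mirrors Ansible's retries/delay loop ("10 retries left" in the log).
    """
    for attempt in range(retries):
        if check():
            return attempt  # number of failed probes before success
        time.sleep(delay)
    raise TimeoutError("check did not succeed within the retry budget")


def rolling_restart(nodes, restart, port_alive, wsrep_synced):
    """Restart cluster nodes one at a time, gating on health between nodes."""
    for node in nodes:
        restart(node)
        # "Wait for MariaDB service port liveness"
        wait_until(lambda: port_alive(node))
        # "Wait for MariaDB service to sync WSREP"
        wait_until(lambda: wsrep_synced(node))
```

The point of the gating is that a Galera node rejoining the cluster must finish its state transfer before the next node is taken down, otherwise quorum can be lost mid-upgrade.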
2026-02-17 05:37:49.097639 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-17 05:37:49.097650 | orchestrator | Tuesday 17 February 2026 05:37:13 +0000 (0:00:02.234) 0:00:05.763 ******
2026-02-17 05:37:49.097686 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:37:49.097699 | orchestrator |
2026-02-17 05:37:49.097710 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-17 05:37:49.097721 | orchestrator | Tuesday 17 February 2026 05:37:16 +0000 (0:00:02.516) 0:00:08.596 ******
2026-02-17 05:37:49.097731 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:37:49.097742 | orchestrator |
2026-02-17 05:37:49.097753 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-17 05:37:49.097764 | orchestrator | Tuesday 17 February 2026 05:37:19 +0000 (0:00:03.486) 0:00:11.112 ******
2026-02-17 05:37:49.097776 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:37:49.097787 | orchestrator |
2026-02-17 05:37:49.097800 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-17 05:37:49.097813 | orchestrator | Tuesday 17 February 2026 05:37:22 +0000 (0:00:03.486) 0:00:14.600 ******
2026-02-17 05:37:49.097825 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:37:49.097838 | orchestrator |
2026-02-17 05:37:49.097851 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-17 05:37:49.097864 | orchestrator | Tuesday 17 February 2026 05:37:32 +0000 (0:00:10.095) 0:00:24.696 ******
2026-02-17 05:37:49.097876 | orchestrator | ok: [testbed-node-0] => {
2026-02-17 05:37:49.097887 | orchestrator |  "changed": false,
2026-02-17 05:37:49.097898 | orchestrator |  "msg": "All assertions passed"
2026-02-17 05:37:49.097909 | orchestrator | }
2026-02-17 05:37:49.097921 | orchestrator |
2026-02-17 05:37:49.097932 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-17 05:37:49.097943 | orchestrator | Tuesday 17 February 2026 05:37:34 +0000 (0:00:01.388) 0:00:26.084 ******
2026-02-17 05:37:49.097953 | orchestrator | ok: [testbed-node-0] => {
2026-02-17 05:37:49.097964 | orchestrator |  "changed": false,
2026-02-17 05:37:49.097975 | orchestrator |  "msg": "All assertions passed"
2026-02-17 05:37:49.097987 | orchestrator | }
2026-02-17 05:37:49.097998 | orchestrator |
2026-02-17 05:37:49.098009 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-17 05:37:49.098072 | orchestrator | Tuesday 17 February 2026 05:37:36 +0000 (0:00:01.694) 0:00:27.779 ******
2026-02-17 05:37:49.098084 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:37:49.098095 | orchestrator |
2026-02-17 05:37:49.098106 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-17 05:37:49.098128 | orchestrator | Tuesday 17 February 2026 05:37:37 +0000 (0:00:02.141) 0:00:29.509 ******
2026-02-17 05:37:49.098139 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:37:49.098150 | orchestrator |
2026-02-17 05:37:49.098161 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-02-17 05:37:49.098172 | orchestrator | Tuesday 17 February 2026 05:37:39 +0000 (0:00:02.892) 0:00:31.651 ******
2026-02-17 05:37:49.098183 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:37:49.098194 | orchestrator |
2026-02-17 05:37:49.098205 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-02-17 05:37:49.098216 | orchestrator | Tuesday 17 February 2026 05:37:42 +0000 (0:00:02.892) 0:00:34.544 ******
2026-02-17 05:37:49.098227 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:37:49.098238 | orchestrator |
2026-02-17 05:37:49.098249 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-02-17 05:37:49.098260 | orchestrator | Tuesday 17 February 2026 05:37:44 +0000 (0:00:01.930) 0:00:36.475 ******
2026-02-17 05:37:49.098304 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:37:49.098330 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:37:49.098345 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:37:49.098357 | orchestrator |
2026-02-17 05:37:49.098368 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-02-17 05:37:49.098379 | orchestrator | Tuesday 17 February 2026 05:37:46 +0000 (0:00:01.844) 0:00:38.319 ******
2026-02-17 05:37:49.098391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:37:49.098424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:38:08.940607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:38:08.940727 | orchestrator |
2026-02-17 05:38:08.940744 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-17 05:38:08.940758 | orchestrator | Tuesday 17 February 2026 05:37:49 +0000 (0:00:02.531) 0:00:40.851 ******
2026-02-17 05:38:08.940770 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-17 05:38:08.940782 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-17 05:38:08.940793 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-17 05:38:08.940805 | orchestrator |
2026-02-17 05:38:08.940817 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-17 05:38:08.940828 | orchestrator | Tuesday 17 February 2026 05:37:51 +0000 (0:00:02.415) 0:00:43.266 ******
2026-02-17 05:38:08.940839 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-17 05:38:08.940851 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-17 05:38:08.940862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-17 05:38:08.940873 | orchestrator |
2026-02-17 05:38:08.940884 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-17 05:38:08.940895 | orchestrator | Tuesday 17 February 2026 05:37:54 +0000 (0:00:03.113) 0:00:46.380 ******
2026-02-17 05:38:08.940906 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-17 05:38:08.940917 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-17 05:38:08.940951 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-17 05:38:08.940963 | orchestrator |
2026-02-17 05:38:08.940974 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-17 05:38:08.940985 | orchestrator | Tuesday 17 February 2026 05:37:56 +0000 (0:00:02.338) 0:00:48.718 ******
2026-02-17 05:38:08.940996 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-17 05:38:08.941006 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-17 05:38:08.941017 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
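Earlier in this play, the tasks "Check if running RabbitMQ is at most one version behind" and "Catch when RabbitMQ is being downgraded" asserted that the running 4.x broker may only step to the target image's version, never backwards and never by more than one version at a time. The snippet below is a simplified, hypothetical Python sketch of such a guard; the real kolla-ansible assertions have their own exact rules, and the one-step policy encoded here (same major with a minor gap of at most 1, or the next major starting at minor 0) is an assumption for illustration only.

```python
def parse_version(tag):
    """Extract (major, minor) from an image tag such as '4.1.5.20251208'."""
    major, minor = tag.split(".")[:2]
    return int(major), int(minor)


def upgrade_allowed(current, new):
    """Return (ok, reason) for a proposed RabbitMQ version step.

    Assumed policy (not kolla-ansible's literal check): no downgrades,
    and at most one minor/major step between running and target.
    """
    cur, tgt = parse_version(current), parse_version(new)
    if tgt < cur:
        return False, "downgrade detected"
    if tgt[0] == cur[0] and tgt[1] - cur[1] <= 1:
        return True, "ok"
    if tgt[0] == cur[0] + 1 and tgt[1] == 0:
        return True, "ok"
    return False, "running version is more than one version behind"
```

Gating the upgrade this way is what lets the log report "All assertions passed" before any broker container is replaced, so an unsupported jump fails fast instead of mid-cluster.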
2026-02-17 05:38:08.941028 | orchestrator |
2026-02-17 05:38:08.941039 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-17 05:38:08.941050 | orchestrator | Tuesday 17 February 2026 05:37:59 +0000 (0:00:02.403) 0:00:51.121 ******
2026-02-17 05:38:08.941061 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-17 05:38:08.941072 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-17 05:38:08.941083 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-17 05:38:08.941094 | orchestrator |
2026-02-17 05:38:08.941105 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-17 05:38:08.941119 | orchestrator | Tuesday 17 February 2026 05:38:01 +0000 (0:00:02.481) 0:00:53.603 ******
2026-02-17 05:38:08.941132 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-17 05:38:08.941160 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-17 05:38:08.941173 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-17 05:38:08.941186 | orchestrator |
2026-02-17 05:38:08.941199 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-17 05:38:08.941212 | orchestrator | Tuesday 17 February 2026 05:38:04 +0000 (0:00:02.717) 0:00:56.320 ******
2026-02-17 05:38:08.941226 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-17 05:38:08.941238 | orchestrator |
2026-02-17 05:38:08.941268 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-02-17 05:38:08.941282 | orchestrator | Tuesday 17 February 2026 05:38:06 +0000 (0:00:01.765) 0:00:58.086 ******
2026-02-17 05:38:08.941296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:38:08.941312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:38:08.941336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:38:08.941350 | orchestrator |
2026-02-17 05:38:08.941363 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-02-17 05:38:08.941376 | orchestrator | Tuesday 17 February 2026 05:38:08 +0000 (0:00:02.365) 0:01:00.451 ******
2026-02-17 05:38:08.941403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:38:17.971701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:38:17.971839 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:38:17.971857 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:38:17.971871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-17 05:38:17.971884 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:38:17.971895 | orchestrator |
2026-02-17 05:38:17.971907 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] ****
2026-02-17 05:38:17.971919 | orchestrator | Tuesday 17 February 2026 05:38:10 +0000 (0:00:01.537) 0:01:01.989 ******
2026-02-17 05:38:17.971946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR':
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-17 05:38:17.971996 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:38:17.972044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-17 05:38:17.972082 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:38:17.972102 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-17 05:38:17.972121 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:38:17.972132 | orchestrator | 2026-02-17 05:38:17.972144 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-17 05:38:17.972155 | orchestrator | Tuesday 17 February 2026 05:38:12 +0000 (0:00:01.806) 0:01:03.796 ****** 2026-02-17 05:38:17.972166 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:38:17.972178 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:38:17.972189 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:38:17.972200 | orchestrator | 2026-02-17 05:38:17.972211 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-17 05:38:17.972222 | orchestrator | Tuesday 17 February 2026 05:38:15 +0000 (0:00:03.662) 0:01:07.459 ****** 2026-02-17 05:38:17.972248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-17 05:38:17.972282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-17 05:40:02.181018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-17 05:40:02.181177 | orchestrator | 2026-02-17 05:40:02.181199 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-17 05:40:02.181213 | orchestrator | Tuesday 17 February 2026 05:38:17 +0000 (0:00:02.268) 0:01:09.728 ****** 2026-02-17 05:40:02.181226 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:40:02.181238 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:02.181250 | orchestrator | } 2026-02-17 05:40:02.181262 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:40:02.181273 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:02.181284 | orchestrator | } 2026-02-17 05:40:02.181295 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 
05:40:02.181306 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:02.181317 | orchestrator | } 2026-02-17 05:40:02.181329 | orchestrator | 2026-02-17 05:40:02.181341 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-17 05:40:02.181352 | orchestrator | Tuesday 17 February 2026 05:38:19 +0000 (0:00:01.409) 0:01:11.137 ****** 2026-02-17 05:40:02.181366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-17 05:40:02.181379 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:40:02.181409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-17 05:40:02.181445 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:40:02.181479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-17 05:40:02.181529 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:40:02.181543 | orchestrator | 
2026-02-17 05:40:02.181556 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-17 05:40:02.181569 | orchestrator | Tuesday 17 February 2026 05:38:21 +0000 (0:00:02.103) 0:01:13.241 ****** 2026-02-17 05:40:02.181581 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:40:02.181593 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:40:02.181605 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:40:02.181619 | orchestrator | 2026-02-17 05:40:02.181632 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-17 05:40:02.181644 | orchestrator | 2026-02-17 05:40:02.181656 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-17 05:40:02.181669 | orchestrator | Tuesday 17 February 2026 05:38:23 +0000 (0:00:02.095) 0:01:15.336 ****** 2026-02-17 05:40:02.181682 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:40:02.181696 | orchestrator | 2026-02-17 05:40:02.181710 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-17 05:40:02.181722 | orchestrator | Tuesday 17 February 2026 05:38:25 +0000 (0:00:02.052) 0:01:17.389 ****** 2026-02-17 05:40:02.181734 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:40:02.181747 | orchestrator | 2026-02-17 05:40:02.181760 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-17 05:40:02.181773 | orchestrator | Tuesday 17 February 2026 05:38:34 +0000 (0:00:08.865) 0:01:26.255 ****** 2026-02-17 05:40:02.181786 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:40:02.181799 | orchestrator | 2026-02-17 05:40:02.181811 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-17 05:40:02.181823 | orchestrator | Tuesday 17 February 2026 05:38:43 +0000 (0:00:09.266) 0:01:35.521 ****** 2026-02-17 05:40:02.181836 | 
orchestrator | changed: [testbed-node-0] 2026-02-17 05:40:02.181849 | orchestrator | 2026-02-17 05:40:02.181861 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-17 05:40:02.181873 | orchestrator | 2026-02-17 05:40:02.181884 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-17 05:40:02.181895 | orchestrator | Tuesday 17 February 2026 05:38:53 +0000 (0:00:09.432) 0:01:44.953 ****** 2026-02-17 05:40:02.181906 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:40:02.181916 | orchestrator | 2026-02-17 05:40:02.181927 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-17 05:40:02.181948 | orchestrator | Tuesday 17 February 2026 05:38:54 +0000 (0:00:01.763) 0:01:46.716 ****** 2026-02-17 05:40:02.181959 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:40:02.181970 | orchestrator | 2026-02-17 05:40:02.181980 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-17 05:40:02.181991 | orchestrator | Tuesday 17 February 2026 05:39:04 +0000 (0:00:09.658) 0:01:56.375 ****** 2026-02-17 05:40:02.182002 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:40:02.182068 | orchestrator | 2026-02-17 05:40:02.182082 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-17 05:40:02.182093 | orchestrator | Tuesday 17 February 2026 05:39:18 +0000 (0:00:14.286) 0:02:10.661 ****** 2026-02-17 05:40:02.182104 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:40:02.182115 | orchestrator | 2026-02-17 05:40:02.182126 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-17 05:40:02.182137 | orchestrator | 2026-02-17 05:40:02.182157 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-17 05:40:02.182169 | 
orchestrator | Tuesday 17 February 2026 05:39:28 +0000 (0:00:09.266) 0:02:19.928 ****** 2026-02-17 05:40:02.182180 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:40:02.182190 | orchestrator | 2026-02-17 05:40:02.182201 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-17 05:40:02.182212 | orchestrator | Tuesday 17 February 2026 05:39:29 +0000 (0:00:01.722) 0:02:21.651 ****** 2026-02-17 05:40:02.182224 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:40:02.182235 | orchestrator | 2026-02-17 05:40:02.182246 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-17 05:40:02.182257 | orchestrator | Tuesday 17 February 2026 05:39:38 +0000 (0:00:08.531) 0:02:30.183 ****** 2026-02-17 05:40:02.182268 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:40:02.182279 | orchestrator | 2026-02-17 05:40:02.182290 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-17 05:40:02.182301 | orchestrator | Tuesday 17 February 2026 05:39:52 +0000 (0:00:13.776) 0:02:43.959 ****** 2026-02-17 05:40:02.182312 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:40:02.182322 | orchestrator | 2026-02-17 05:40:02.182333 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-17 05:40:02.182344 | orchestrator | 2026-02-17 05:40:02.182355 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-17 05:40:02.182376 | orchestrator | Tuesday 17 February 2026 05:40:02 +0000 (0:00:09.974) 0:02:53.934 ****** 2026-02-17 05:40:08.519600 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:40:08.519690 | orchestrator | 2026-02-17 05:40:08.519705 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-17 05:40:08.519717 | 
orchestrator | Tuesday 17 February 2026 05:40:03 +0000 (0:00:01.376) 0:02:55.311 ****** 2026-02-17 05:40:08.519729 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:40:08.519740 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:40:08.519751 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:40:08.519762 | orchestrator | 2026-02-17 05:40:08.519773 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:40:08.519785 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-17 05:40:08.519797 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-17 05:40:08.519808 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-17 05:40:08.519819 | orchestrator | 2026-02-17 05:40:08.519830 | orchestrator | 2026-02-17 05:40:08.519841 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:40:08.519852 | orchestrator | Tuesday 17 February 2026 05:40:08 +0000 (0:00:04.678) 0:02:59.990 ****** 2026-02-17 05:40:08.519890 | orchestrator | =============================================================================== 2026-02-17 05:40:08.519902 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.33s 2026-02-17 05:40:08.519912 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 28.67s 2026-02-17 05:40:08.519923 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 27.06s 2026-02-17 05:40:08.520021 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.10s 2026-02-17 05:40:08.520042 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.54s 2026-02-17 05:40:08.520053 | orchestrator | rabbitmq : Enable all 
stable feature flags ------------------------------ 4.68s 2026-02-17 05:40:08.520064 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.66s 2026-02-17 05:40:08.520075 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.49s 2026-02-17 05:40:08.520086 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.11s 2026-02-17 05:40:08.520096 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.89s 2026-02-17 05:40:08.520107 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.84s 2026-02-17 05:40:08.520118 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.72s 2026-02-17 05:40:08.520129 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.53s 2026-02-17 05:40:08.520140 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.51s 2026-02-17 05:40:08.520150 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.48s 2026-02-17 05:40:08.520161 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.42s 2026-02-17 05:40:08.520172 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.40s 2026-02-17 05:40:08.520182 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.37s 2026-02-17 05:40:08.520193 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.34s 2026-02-17 05:40:08.520204 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.27s 2026-02-17 05:40:08.759152 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-17 05:40:10.732975 | orchestrator | 2026-02-17 05:40:10 | INFO  | Task 3d9aeeef-fb95-4833-a41f-101f607f2687 
(openvswitch) was prepared for execution. 2026-02-17 05:40:10.733074 | orchestrator | 2026-02-17 05:40:10 | INFO  | It takes a moment until task 3d9aeeef-fb95-4833-a41f-101f607f2687 (openvswitch) has been started and output is visible here. 2026-02-17 05:40:38.448251 | orchestrator | 2026-02-17 05:40:38.448410 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 05:40:38.448430 | orchestrator | 2026-02-17 05:40:38.448444 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 05:40:38.448456 | orchestrator | Tuesday 17 February 2026 05:40:16 +0000 (0:00:01.415) 0:00:01.416 ****** 2026-02-17 05:40:38.448527 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:40:38.449258 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:40:38.449279 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:40:38.449292 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:40:38.449303 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:40:38.449314 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:40:38.449326 | orchestrator | 2026-02-17 05:40:38.449337 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 05:40:38.449349 | orchestrator | Tuesday 17 February 2026 05:40:18 +0000 (0:00:02.398) 0:00:03.814 ****** 2026-02-17 05:40:38.449360 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 05:40:38.449372 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 05:40:38.449383 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 05:40:38.449422 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 05:40:38.449434 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 
05:40:38.449445 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-17 05:40:38.449456 | orchestrator | 2026-02-17 05:40:38.449467 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-17 05:40:38.449496 | orchestrator | 2026-02-17 05:40:38.449507 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-17 05:40:38.449518 | orchestrator | Tuesday 17 February 2026 05:40:22 +0000 (0:00:03.263) 0:00:07.077 ****** 2026-02-17 05:40:38.449530 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:40:38.449543 | orchestrator | 2026-02-17 05:40:38.449554 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-17 05:40:38.449565 | orchestrator | Tuesday 17 February 2026 05:40:24 +0000 (0:00:02.745) 0:00:09.823 ****** 2026-02-17 05:40:38.449576 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-17 05:40:38.449587 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-17 05:40:38.449598 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-17 05:40:38.449609 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-17 05:40:38.449620 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-17 05:40:38.449631 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-17 05:40:38.449641 | orchestrator | 2026-02-17 05:40:38.449652 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-17 05:40:38.449663 | orchestrator | Tuesday 17 February 2026 05:40:27 +0000 (0:00:02.475) 0:00:12.298 ****** 2026-02-17 05:40:38.449674 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-17 05:40:38.449684 | orchestrator | ok: 
[testbed-node-2] => (item=openvswitch) 2026-02-17 05:40:38.449695 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-17 05:40:38.449706 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-17 05:40:38.449717 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-17 05:40:38.449727 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-17 05:40:38.449738 | orchestrator | 2026-02-17 05:40:38.449749 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-17 05:40:38.449760 | orchestrator | Tuesday 17 February 2026 05:40:30 +0000 (0:00:03.071) 0:00:15.370 ****** 2026-02-17 05:40:38.449771 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-17 05:40:38.449782 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:40:38.449794 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-17 05:40:38.449805 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:40:38.449816 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-17 05:40:38.449827 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:40:38.449837 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-17 05:40:38.449848 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:40:38.449859 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-17 05:40:38.449870 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:40:38.449880 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-17 05:40:38.449892 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:40:38.449902 | orchestrator | 2026-02-17 05:40:38.449913 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-17 05:40:38.449924 | orchestrator | Tuesday 17 February 2026 05:40:33 +0000 (0:00:03.051) 0:00:18.421 ****** 2026-02-17 05:40:38.449935 | orchestrator | 
skipping: [testbed-node-0] 2026-02-17 05:40:38.449946 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:40:38.449957 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:40:38.449976 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:40:38.449987 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:40:38.449998 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:40:38.450009 | orchestrator | 2026-02-17 05:40:38.450106 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-17 05:40:38.450126 | orchestrator | Tuesday 17 February 2026 05:40:35 +0000 (0:00:02.241) 0:00:20.663 ****** 2026-02-17 05:40:38.450194 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:38.450224 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:38.450243 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:38.450262 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:38.450281 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:38.450324 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:38.450351 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 
05:40:40.750105 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750226 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750243 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750255 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750306 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750320 | orchestrator | 2026-02-17 05:40:40.750333 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-17 05:40:40.750345 | orchestrator | Tuesday 17 February 2026 05:40:38 +0000 
(0:00:02.775) 0:00:23.439 ****** 2026-02-17 05:40:40.750378 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750393 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750404 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750416 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750441 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750453 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:40.750504 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809348 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809453 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809554 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809584 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809596 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809608 | orchestrator | 2026-02-17 05:40:46.809621 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-17 05:40:46.809633 | orchestrator | Tuesday 17 February 2026 05:40:42 +0000 (0:00:03.613) 0:00:27.052 ****** 2026-02-17 05:40:46.809644 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:40:46.809656 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:40:46.809667 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:40:46.809678 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:40:46.809689 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:40:46.809700 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:40:46.809711 | orchestrator | 2026-02-17 05:40:46.809723 | orchestrator | TASK [service-check-containers : openvswitch | Check 
containers] *************** 2026-02-17 05:40:46.809751 | orchestrator | Tuesday 17 February 2026 05:40:44 +0000 (0:00:02.623) 0:00:29.675 ****** 2026-02-17 05:40:46.809764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:46.809824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-02-17 05:40:46.809844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-17 05:40:51.067058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:51.067192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:51.067209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:51.067236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:51.067249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:51.067277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-17 05:40:51.067290 | orchestrator | 2026-02-17 05:40:51.067312 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-17 05:40:51.067325 | orchestrator | Tuesday 17 February 2026 05:40:48 +0000 (0:00:03.702) 0:00:33.377 ****** 2026-02-17 05:40:51.067337 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:40:51.067350 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:51.067361 | orchestrator | } 2026-02-17 05:40:51.067373 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:40:51.067409 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:51.067420 | 
orchestrator | } 2026-02-17 05:40:51.067432 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 05:40:51.067443 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:51.067454 | orchestrator | } 2026-02-17 05:40:51.067506 | orchestrator | changed: [testbed-node-3] => { 2026-02-17 05:40:51.067518 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:51.067529 | orchestrator | } 2026-02-17 05:40:51.067540 | orchestrator | changed: [testbed-node-4] => { 2026-02-17 05:40:51.067551 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:51.067562 | orchestrator | } 2026-02-17 05:40:51.067573 | orchestrator | changed: [testbed-node-5] => { 2026-02-17 05:40:51.067585 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:40:51.067596 | orchestrator | } 2026-02-17 05:40:51.067607 | orchestrator | 2026-02-17 05:40:51.067619 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-17 05:40:51.067631 | orchestrator | Tuesday 17 February 2026 05:40:50 +0000 (0:00:02.114) 0:00:35.492 ****** 2026-02-17 05:40:51.067643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-17 05:40:51.067662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-17 05:40:51.067675 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:40:51.067687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-17 05:40:51.067699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-17 05:40:51.067739 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:41:22.906620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-17 05:41:22.906774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-17 05:41:22.906805 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:41:22.906829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-17 05:41:22.906874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-17 05:41:22.906895 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:41:22.906916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-17 05:41:22.906993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-17 05:41:22.907016 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:41:22.907038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-17 05:41:22.907062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-17 05:41:22.907082 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:41:22.907104 | orchestrator | 2026-02-17 05:41:22.907128 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 05:41:22.907151 | orchestrator | Tuesday 17 February 2026 05:40:53 +0000 (0:00:02.958) 0:00:38.451 ****** 2026-02-17 05:41:22.907172 | orchestrator | 2026-02-17 05:41:22.907193 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 05:41:22.907215 | orchestrator | Tuesday 17 February 2026 05:40:53 +0000 (0:00:00.532) 0:00:38.984 ****** 2026-02-17 05:41:22.907236 | orchestrator | 2026-02-17 05:41:22.907258 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 05:41:22.907290 | orchestrator | Tuesday 17 February 2026 05:40:54 +0000 (0:00:00.553) 0:00:39.538 ****** 2026-02-17 05:41:22.907312 | orchestrator | 2026-02-17 05:41:22.907333 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 05:41:22.907353 | orchestrator | Tuesday 17 February 2026 05:40:55 +0000 (0:00:00.551) 0:00:40.089 ****** 2026-02-17 05:41:22.907372 | orchestrator | 2026-02-17 05:41:22.907392 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-02-17 05:41:22.907410 | orchestrator | Tuesday 17 February 2026 05:40:56 +0000 (0:00:00.930) 0:00:41.019 ****** 2026-02-17 05:41:22.907442 | orchestrator | 2026-02-17 05:41:22.907492 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-17 05:41:22.907510 | orchestrator | Tuesday 17 February 2026 05:40:56 +0000 (0:00:00.550) 0:00:41.569 ****** 2026-02-17 05:41:22.907529 | orchestrator | 2026-02-17 05:41:22.907547 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-17 05:41:22.907565 | orchestrator | Tuesday 17 February 2026 05:40:57 +0000 (0:00:00.882) 0:00:42.452 ****** 2026-02-17 05:41:22.907582 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:41:22.907600 | orchestrator | changed: [testbed-node-4] 2026-02-17 05:41:22.907617 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:41:22.907636 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:41:22.907655 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:41:22.907674 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:41:22.907694 | orchestrator | 2026-02-17 05:41:22.907713 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-17 05:41:22.907733 | orchestrator | Tuesday 17 February 2026 05:41:09 +0000 (0:00:11.671) 0:00:54.124 ****** 2026-02-17 05:41:22.907752 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:41:22.907772 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:41:22.907790 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:41:22.907808 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:41:22.907825 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:41:22.907835 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:41:22.907846 | orchestrator | 2026-02-17 05:41:22.907857 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] 
********* 2026-02-17 05:41:22.907868 | orchestrator | Tuesday 17 February 2026 05:41:11 +0000 (0:00:02.252) 0:00:56.376 ****** 2026-02-17 05:41:22.907879 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:41:22.907890 | orchestrator | changed: [testbed-node-4] 2026-02-17 05:41:22.907901 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:41:22.907911 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:41:22.907922 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:41:22.907933 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:41:22.907944 | orchestrator | 2026-02-17 05:41:22.907955 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-17 05:41:22.907981 | orchestrator | Tuesday 17 February 2026 05:41:22 +0000 (0:00:11.522) 0:01:07.899 ****** 2026-02-17 05:41:39.911533 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-17 05:41:39.911648 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-17 05:41:39.911664 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-17 05:41:39.911676 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-17 05:41:39.911687 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-17 05:41:39.911698 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-17 05:41:39.911709 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-17 05:41:39.911720 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-3'}) 2026-02-17 05:41:39.911731 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-17 05:41:39.911742 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-17 05:41:39.911753 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-17 05:41:39.911764 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-17 05:41:39.911800 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 05:41:39.911812 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 05:41:39.911824 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 05:41:39.911835 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 05:41:39.911846 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 05:41:39.911857 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-17 05:41:39.911868 | orchestrator | 2026-02-17 05:41:39.911880 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-17 05:41:39.911892 | orchestrator | Tuesday 17 February 2026 05:41:30 +0000 (0:00:07.586) 0:01:15.486 ****** 2026-02-17 05:41:39.911919 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-17 05:41:39.911931 | orchestrator | skipping: [testbed-node-3] 2026-02-17 
05:41:39.911943 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-17 05:41:39.911954 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:41:39.911965 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-17 05:41:39.911976 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:41:39.911987 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-02-17 05:41:39.912000 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-02-17 05:41:39.912012 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-02-17 05:41:39.912024 | orchestrator | 2026-02-17 05:41:39.912036 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-17 05:41:39.912050 | orchestrator | Tuesday 17 February 2026 05:41:33 +0000 (0:00:03.310) 0:01:18.797 ****** 2026-02-17 05:41:39.912062 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-17 05:41:39.912075 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:41:39.912087 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-17 05:41:39.912099 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:41:39.912111 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-17 05:41:39.912124 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:41:39.912137 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-17 05:41:39.912149 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-17 05:41:39.912161 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-17 05:41:39.912174 | orchestrator | 2026-02-17 05:41:39.912187 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 05:41:39.912201 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-17 05:41:39.912215 | orchestrator | 
testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-17 05:41:39.912228 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-17 05:41:39.912240 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:41:39.912269 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:41:39.912280 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-17 05:41:39.912299 | orchestrator | 2026-02-17 05:41:39.912310 | orchestrator | 2026-02-17 05:41:39.912321 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 05:41:39.912332 | orchestrator | Tuesday 17 February 2026 05:41:39 +0000 (0:00:05.569) 0:01:24.367 ****** 2026-02-17 05:41:39.912343 | orchestrator | =============================================================================== 2026-02-17 05:41:39.912354 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.67s 2026-02-17 05:41:39.912365 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.52s 2026-02-17 05:41:39.912376 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.59s 2026-02-17 05:41:39.912387 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.57s 2026-02-17 05:41:39.912398 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 4.00s 2026-02-17 05:41:39.912409 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.70s 2026-02-17 05:41:39.912419 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.61s 2026-02-17 05:41:39.912430 | orchestrator | openvswitch 
: Ensuring OVS bridge is properly setup --------------------- 3.31s 2026-02-17 05:41:39.912473 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.26s 2026-02-17 05:41:39.912485 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.07s 2026-02-17 05:41:39.912496 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.05s 2026-02-17 05:41:39.912507 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.96s 2026-02-17 05:41:39.912518 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.78s 2026-02-17 05:41:39.912529 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.75s 2026-02-17 05:41:39.912540 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.62s 2026-02-17 05:41:39.912550 | orchestrator | module-load : Load modules ---------------------------------------------- 2.48s 2026-02-17 05:41:39.912561 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.40s 2026-02-17 05:41:39.912572 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.25s 2026-02-17 05:41:39.912582 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.24s 2026-02-17 05:41:39.912593 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.11s 2026-02-17 05:41:40.403173 | orchestrator | + osism apply -a upgrade ovn 2026-02-17 05:41:42.604316 | orchestrator | 2026-02-17 05:41:42 | INFO  | Task 10abdef5-ce73-4865-a91f-7f449593df48 (ovn) was prepared for execution. 2026-02-17 05:41:42.604418 | orchestrator | 2026-02-17 05:41:42 | INFO  | It takes a moment until task 10abdef5-ce73-4865-a91f-7f449593df48 (ovn) has been started and output is visible here. 
2026-02-17 05:41:56.725661 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-17 05:41:56.725781 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-17 05:41:56.725809 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-17 05:41:56.725819 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-17 05:41:56.725839 | orchestrator | 2026-02-17 05:41:56.725849 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-17 05:41:56.725859 | orchestrator | 2026-02-17 05:41:56.725869 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-17 05:41:56.725879 | orchestrator | Tuesday 17 February 2026 05:41:47 +0000 (0:00:00.893) 0:00:00.893 ****** 2026-02-17 05:41:56.725912 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:41:56.725923 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:41:56.725933 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:41:56.725942 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:41:56.725952 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:41:56.725962 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:41:56.725971 | orchestrator | 2026-02-17 05:41:56.725981 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-17 05:41:56.725992 | orchestrator | Tuesday 17 February 2026 05:41:49 +0000 (0:00:01.655) 0:00:02.548 ****** 2026-02-17 05:41:56.726001 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-17 05:41:56.726012 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-17 05:41:56.726088 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-17 05:41:56.726107 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-17 05:41:56.726125 | orchestrator | ok: [testbed-node-4] => 
(item=enable_ovn_True) 2026-02-17 05:41:56.726143 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-17 05:41:56.726153 | orchestrator | 2026-02-17 05:41:56.726163 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-17 05:41:56.726172 | orchestrator | 2026-02-17 05:41:56.726184 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-17 05:41:56.726196 | orchestrator | Tuesday 17 February 2026 05:41:50 +0000 (0:00:01.237) 0:00:03.786 ****** 2026-02-17 05:41:56.726208 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:41:56.726220 | orchestrator | 2026-02-17 05:41:56.726231 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-17 05:41:56.726242 | orchestrator | Tuesday 17 February 2026 05:41:52 +0000 (0:00:01.723) 0:00:05.509 ****** 2026-02-17 05:41:56.726256 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726269 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-17 05:41:56.726281 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726292 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726335 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726357 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726369 | orchestrator | 2026-02-17 
05:41:56.726380 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-17 05:41:56.726391 | orchestrator | Tuesday 17 February 2026 05:41:54 +0000 (0:00:01.627) 0:00:07.136 ****** 2026-02-17 05:41:56.726403 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726414 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726425 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726469 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726480 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726492 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726503 | orchestrator | 2026-02-17 05:41:56.726515 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-17 05:41:56.726526 | orchestrator | Tuesday 17 February 2026 05:41:55 +0000 (0:00:01.556) 0:00:08.693 ****** 2026-02-17 05:41:56.726549 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:41:56.726568 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896650 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896754 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896768 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896779 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896790 | orchestrator | 2026-02-17 05:42:01.896801 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-17 05:42:01.896812 | orchestrator | Tuesday 17 February 2026 05:41:57 +0000 (0:00:01.571) 0:00:10.264 ****** 2026-02-17 05:42:01.896822 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896833 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896843 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896889 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896915 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896926 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896936 | orchestrator | 2026-02-17 05:42:01.896946 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-17 05:42:01.896956 | orchestrator | Tuesday 17 February 2026 05:41:59 +0000 (0:00:02.154) 0:00:12.418 ****** 2026-02-17 05:42:01.896968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896981 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.896992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.897002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.897012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.897029 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:42:01.897040 | orchestrator | 2026-02-17 05:42:01.897055 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-17 05:42:01.897066 | orchestrator | Tuesday 17 February 2026 05:42:00 +0000 (0:00:01.427) 0:00:13.846 ****** 2026-02-17 05:42:01.897076 | orchestrator | changed: [testbed-node-0] => { 2026-02-17 05:42:01.897087 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:42:01.897097 | orchestrator | } 2026-02-17 05:42:01.897108 | orchestrator | changed: [testbed-node-1] => { 2026-02-17 05:42:01.897117 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:42:01.897127 | orchestrator | } 2026-02-17 05:42:01.897137 | orchestrator | changed: [testbed-node-2] => { 2026-02-17 05:42:01.897147 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:42:01.897156 | orchestrator | } 2026-02-17 05:42:01.897166 | orchestrator | changed: [testbed-node-3] => { 2026-02-17 05:42:01.897178 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:42:01.897189 | orchestrator | } 2026-02-17 05:42:01.897202 | orchestrator | changed: [testbed-node-4] => { 2026-02-17 05:42:01.897213 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 
05:42:01.897225 | orchestrator | } 2026-02-17 05:42:01.897242 | orchestrator | changed: [testbed-node-5] => { 2026-02-17 05:42:26.715604 | orchestrator |  "msg": "Notifying handlers" 2026-02-17 05:42:26.715710 | orchestrator | } 2026-02-17 05:42:26.715725 | orchestrator | 2026-02-17 05:42:26.715738 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-17 05:42:26.715750 | orchestrator | Tuesday 17 February 2026 05:42:01 +0000 (0:00:01.038) 0:00:14.885 ****** 2026-02-17 05:42:26.715764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:42:26.715779 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:42:26.715791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:42:26.715802 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:42:26.715813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:42:26.715851 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:42:26.715863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:42:26.715874 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:42:26.715885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:42:26.715897 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:42:26.715908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-17 05:42:26.715919 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:42:26.715930 | orchestrator | 2026-02-17 05:42:26.715942 | orchestrator | TASK [ovn-controller : Create br-int 
bridge on OpenvSwitch] ******************** 2026-02-17 05:42:26.715953 | orchestrator | Tuesday 17 February 2026 05:42:03 +0000 (0:00:02.101) 0:00:16.987 ****** 2026-02-17 05:42:26.715964 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:42:26.715976 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:42:26.715987 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:42:26.715998 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:42:26.716023 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:42:26.716035 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:42:26.716045 | orchestrator | 2026-02-17 05:42:26.716074 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-17 05:42:26.716086 | orchestrator | Tuesday 17 February 2026 05:42:06 +0000 (0:00:02.529) 0:00:19.517 ****** 2026-02-17 05:42:26.716097 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-17 05:42:26.716109 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-17 05:42:26.716131 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-17 05:42:26.716161 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-17 05:42:26.716180 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-17 05:42:26.716200 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-17 05:42:26.716220 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-17 05:42:26.716239 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-17 05:42:26.716260 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 05:42:26.716279 | orchestrator | ok: [testbed-node-0] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 05:42:26.716295 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 05:42:26.716306 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 05:42:26.716326 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 05:42:26.716337 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-17 05:42:26.716348 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-17 05:42:26.716361 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-17 05:42:26.716372 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-17 05:42:26.716383 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-17 05:42:26.716394 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-17 05:42:26.716405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-17 05:42:26.716416 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 05:42:26.716462 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 05:42:26.716474 | orchestrator | ok: [testbed-node-1] => 
(item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 05:42:26.716485 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 05:42:26.716495 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 05:42:26.716507 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-17 05:42:26.716517 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 05:42:26.716528 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 05:42:26.716539 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 05:42:26.716550 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 05:42:26.716561 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 05:42:26.716572 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-17 05:42:26.716583 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 05:42:26.716594 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 05:42:26.716605 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 05:42:26.716616 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 05:42:26.716627 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 05:42:26.716644 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-17 05:42:26.716655 | orchestrator | ok: 
[testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-17 05:42:26.716667 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-17 05:42:26.716678 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-17 05:42:26.716689 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-17 05:42:26.716715 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-17 05:44:50.703867 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-17 05:44:50.703984 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-17 05:44:50.704002 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-17 05:44:50.704014 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-17 05:44:50.704025 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-17 05:44:50.704036 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-17 05:44:50.704047 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-17 05:44:50.704059 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 
'value': '', 'state': 'absent'}) 2026-02-17 05:44:50.704071 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-17 05:44:50.704083 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-17 05:44:50.704094 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-17 05:44:50.704105 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-17 05:44:50.704117 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-17 05:44:50.704128 | orchestrator | 2026-02-17 05:44:50.704141 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 05:44:50.704152 | orchestrator | Tuesday 17 February 2026 05:42:26 +0000 (0:00:19.668) 0:00:39.186 ****** 2026-02-17 05:44:50.704164 | orchestrator | 2026-02-17 05:44:50.704175 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 05:44:50.704186 | orchestrator | Tuesday 17 February 2026 05:42:26 +0000 (0:00:00.087) 0:00:39.273 ****** 2026-02-17 05:44:50.704197 | orchestrator | 2026-02-17 05:44:50.704209 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 05:44:50.704220 | orchestrator | Tuesday 17 February 2026 05:42:26 +0000 (0:00:00.083) 0:00:39.356 ****** 2026-02-17 05:44:50.704231 | orchestrator | 2026-02-17 05:44:50.704242 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 05:44:50.704253 | orchestrator | Tuesday 17 February 2026 05:42:26 +0000 (0:00:00.084) 0:00:39.441 ****** 2026-02-17 
05:44:50.704264 | orchestrator | 2026-02-17 05:44:50.704275 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 05:44:50.704286 | orchestrator | Tuesday 17 February 2026 05:42:26 +0000 (0:00:00.076) 0:00:39.517 ****** 2026-02-17 05:44:50.704297 | orchestrator | 2026-02-17 05:44:50.704308 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-17 05:44:50.704319 | orchestrator | Tuesday 17 February 2026 05:42:26 +0000 (0:00:00.075) 0:00:39.593 ****** 2026-02-17 05:44:50.704330 | orchestrator | 2026-02-17 05:44:50.704341 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-17 05:44:50.704352 | orchestrator | Tuesday 17 February 2026 05:42:26 +0000 (0:00:00.076) 0:00:39.670 ****** 2026-02-17 05:44:50.704364 | orchestrator | changed: [testbed-node-3] 2026-02-17 05:44:50.704445 | orchestrator | changed: [testbed-node-5] 2026-02-17 05:44:50.704466 | orchestrator | changed: [testbed-node-4] 2026-02-17 05:44:50.704486 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:44:50.704500 | orchestrator | changed: [testbed-node-2] 2026-02-17 05:44:50.704513 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:44:50.704526 | orchestrator | 2026-02-17 05:44:50.704538 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-17 05:44:50.704551 | orchestrator | 2026-02-17 05:44:50.704563 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-17 05:44:50.704576 | orchestrator | Tuesday 17 February 2026 05:44:37 +0000 (0:02:11.131) 0:02:50.802 ****** 2026-02-17 05:44:50.704589 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:44:50.704601 | orchestrator | 2026-02-17 05:44:50.704630 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-02-17 05:44:50.704642 | orchestrator | Tuesday 17 February 2026 05:44:39 +0000 (0:00:01.347) 0:02:52.149 ****** 2026-02-17 05:44:50.704654 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-17 05:44:50.704665 | orchestrator | 2026-02-17 05:44:50.704676 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-17 05:44:50.704686 | orchestrator | Tuesday 17 February 2026 05:44:40 +0000 (0:00:01.272) 0:02:53.422 ****** 2026-02-17 05:44:50.704697 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.704709 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.704720 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.704731 | orchestrator | 2026-02-17 05:44:50.704742 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-17 05:44:50.704770 | orchestrator | Tuesday 17 February 2026 05:44:41 +0000 (0:00:00.886) 0:02:54.308 ****** 2026-02-17 05:44:50.704781 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.704792 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.704803 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.704813 | orchestrator | 2026-02-17 05:44:50.704824 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-17 05:44:50.704835 | orchestrator | Tuesday 17 February 2026 05:44:41 +0000 (0:00:00.424) 0:02:54.733 ****** 2026-02-17 05:44:50.704846 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.704857 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.704868 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.704878 | orchestrator | 2026-02-17 05:44:50.704889 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-17 05:44:50.704900 | orchestrator | Tuesday 17 February 
2026 05:44:42 +0000 (0:00:00.406) 0:02:55.139 ****** 2026-02-17 05:44:50.704911 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.704922 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.704932 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.704943 | orchestrator | 2026-02-17 05:44:50.704954 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-17 05:44:50.704965 | orchestrator | Tuesday 17 February 2026 05:44:42 +0000 (0:00:00.691) 0:02:55.831 ****** 2026-02-17 05:44:50.704975 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.704986 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.704997 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705008 | orchestrator | 2026-02-17 05:44:50.705018 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-17 05:44:50.705029 | orchestrator | Tuesday 17 February 2026 05:44:43 +0000 (0:00:00.420) 0:02:56.252 ****** 2026-02-17 05:44:50.705040 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:44:50.705051 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:44:50.705062 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:44:50.705078 | orchestrator | 2026-02-17 05:44:50.705097 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-17 05:44:50.705114 | orchestrator | Tuesday 17 February 2026 05:44:43 +0000 (0:00:00.369) 0:02:56.621 ****** 2026-02-17 05:44:50.705146 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.705158 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.705169 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705179 | orchestrator | 2026-02-17 05:44:50.705190 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-17 05:44:50.705201 | orchestrator | Tuesday 17 February 2026 05:44:44 +0000 (0:00:00.784) 0:02:57.406 ****** 
2026-02-17 05:44:50.705212 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.705223 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.705233 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705244 | orchestrator | 2026-02-17 05:44:50.705255 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-17 05:44:50.705266 | orchestrator | Tuesday 17 February 2026 05:44:45 +0000 (0:00:00.662) 0:02:58.069 ****** 2026-02-17 05:44:50.705277 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.705287 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.705298 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705309 | orchestrator | 2026-02-17 05:44:50.705319 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-17 05:44:50.705330 | orchestrator | Tuesday 17 February 2026 05:44:45 +0000 (0:00:00.900) 0:02:58.969 ****** 2026-02-17 05:44:50.705341 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.705352 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.705362 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705397 | orchestrator | 2026-02-17 05:44:50.705408 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-17 05:44:50.705419 | orchestrator | Tuesday 17 February 2026 05:44:46 +0000 (0:00:00.387) 0:02:59.357 ****** 2026-02-17 05:44:50.705430 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:44:50.705441 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:44:50.705452 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:44:50.705463 | orchestrator | 2026-02-17 05:44:50.705474 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-17 05:44:50.705485 | orchestrator | Tuesday 17 February 2026 05:44:46 +0000 (0:00:00.645) 0:03:00.003 ****** 2026-02-17 05:44:50.705496 | orchestrator | skipping: 
[testbed-node-0] 2026-02-17 05:44:50.705507 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:44:50.705518 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:44:50.705528 | orchestrator | 2026-02-17 05:44:50.705556 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-17 05:44:50.705567 | orchestrator | Tuesday 17 February 2026 05:44:47 +0000 (0:00:00.389) 0:03:00.393 ****** 2026-02-17 05:44:50.705578 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.705589 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.705600 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705611 | orchestrator | 2026-02-17 05:44:50.705622 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-17 05:44:50.705633 | orchestrator | Tuesday 17 February 2026 05:44:48 +0000 (0:00:00.811) 0:03:01.205 ****** 2026-02-17 05:44:50.705643 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.705654 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.705665 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705676 | orchestrator | 2026-02-17 05:44:50.705687 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-17 05:44:50.705698 | orchestrator | Tuesday 17 February 2026 05:44:48 +0000 (0:00:00.417) 0:03:01.622 ****** 2026-02-17 05:44:50.705716 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.705727 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:44:50.705738 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705749 | orchestrator | 2026-02-17 05:44:50.705760 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-17 05:44:50.705771 | orchestrator | Tuesday 17 February 2026 05:44:49 +0000 (0:00:01.212) 0:03:02.834 ****** 2026-02-17 05:44:50.705782 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:44:50.705792 | orchestrator | 
ok: [testbed-node-1] 2026-02-17 05:44:50.705811 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:44:50.705821 | orchestrator | 2026-02-17 05:44:50.705832 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-17 05:44:50.705843 | orchestrator | Tuesday 17 February 2026 05:44:50 +0000 (0:00:00.428) 0:03:03.263 ****** 2026-02-17 05:44:50.705854 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:44:50.705865 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:44:50.705876 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:44:50.705887 | orchestrator | 2026-02-17 05:44:50.705914 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-17 05:45:00.024609 | orchestrator | Tuesday 17 February 2026 05:44:50 +0000 (0:00:00.430) 0:03:03.694 ****** 2026-02-17 05:45:00.024740 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:45:00.024759 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:45:00.024771 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:45:00.024783 | orchestrator | 2026-02-17 05:45:00.024796 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-17 05:45:00.024807 | orchestrator | Tuesday 17 February 2026 05:44:51 +0000 (0:00:00.722) 0:03:04.416 ****** 2026-02-17 05:45:00.024822 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-17 05:45:00.024838 
| orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.024851 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.024864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.024893 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.024943 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.024976 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.024988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.025001 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.025012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.025024 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.025035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.025047 | orchestrator |
2026-02-17 05:45:00.025059 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-17 05:45:00.025070 | orchestrator | Tuesday 17 February 2026 05:44:54 +0000 (0:00:03.272) 0:03:07.689 ******
2026-02-17 05:45:00.025082 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.025110 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:00.025133 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.554887 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.554999 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555015 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555040 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555104 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 
'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555148 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555173 | orchestrator |
2026-02-17 05:45:10.555186 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-02-17 05:45:10.555198 | orchestrator | Tuesday 17 February 2026 05:45:00 +0000 (0:00:05.327) 0:03:13.016 ******
2026-02-17 05:45:10.555210 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-02-17 05:45:10.555221 | orchestrator |
2026-02-17 05:45:10.555232 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-02-17 05:45:10.555243 | orchestrator | Tuesday 17 February 2026 05:45:01 +0000 (0:00:01.033) 0:03:14.050 ******
2026-02-17 05:45:10.555255 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:45:10.555267 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:45:10.555278 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:45:10.555289 | orchestrator |
2026-02-17 05:45:10.555300 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-02-17 05:45:10.555311 | orchestrator | Tuesday 17 February 2026 05:45:02 +0000 (0:00:01.010) 0:03:15.061 ******
2026-02-17 05:45:10.555322 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:45:10.555334 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:45:10.555344 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:45:10.555355 | orchestrator |
2026-02-17 05:45:10.555398 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-02-17 05:45:10.555409 | orchestrator | Tuesday 17 February 2026 05:45:03 +0000 (0:00:01.694) 0:03:16.756 ******
2026-02-17 05:45:10.555421 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:45:10.555440 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:45:10.555452 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:45:10.555462 | orchestrator |
2026-02-17 05:45:10.555473 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-02-17 05:45:10.555484 | orchestrator | Tuesday 17 February 2026 05:45:05 +0000 (0:00:02.018) 0:03:18.775 ******
2026-02-17 05:45:10.555497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:10.555560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676499 | orchestrator |
2026-02-17 05:45:13.676512 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-17 05:45:13.676525 | orchestrator | Tuesday 17 February 2026 05:45:10 +0000 (0:00:04.758) 0:03:23.534 ******
2026-02-17 05:45:13.676537 | orchestrator | changed: [testbed-node-0] => {
2026-02-17 05:45:13.676549 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:45:13.676560 | orchestrator | }
2026-02-17 05:45:13.676571 | orchestrator | changed: [testbed-node-1] => {
2026-02-17 05:45:13.676582 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:45:13.676593 | orchestrator | }
2026-02-17 05:45:13.676605 | orchestrator | changed: [testbed-node-2] => {
2026-02-17 05:45:13.676616 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:45:13.676627 | orchestrator | }
2026-02-17 05:45:13.676638 | orchestrator |
2026-02-17 05:45:13.676666 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-17 05:45:13.676678 | orchestrator | Tuesday 17 February 2026 05:45:11 +0000 (0:00:00.516) 0:03:24.050 ******
2026-02-17 05:45:13.676698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:45:13.676807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:46:34.827964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:46:34.828047 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-17 05:46:34.828054 | orchestrator |
2026-02-17 05:46:34.828061 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-02-17 05:46:34.828066 | orchestrator | Tuesday 17 February 2026 05:45:13 +0000 (0:00:02.616) 0:03:26.667 ******
2026-02-17 05:46:34.828072 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-02-17 05:46:34.828076 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-17 05:46:34.828080 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-17 05:46:34.828085 | orchestrator |
2026-02-17 05:46:34.828089 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to 
restart containers] ***
2026-02-17 05:46:34.828094 | orchestrator | Tuesday 17 February 2026 05:45:15 +0000 (0:00:01.366) 0:03:28.033 ******
2026-02-17 05:46:34.828098 | orchestrator | changed: [testbed-node-0] => {
2026-02-17 05:46:34.828104 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:46:34.828108 | orchestrator | }
2026-02-17 05:46:34.828113 | orchestrator | changed: [testbed-node-1] => {
2026-02-17 05:46:34.828117 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:46:34.828121 | orchestrator | }
2026-02-17 05:46:34.828125 | orchestrator | changed: [testbed-node-2] => {
2026-02-17 05:46:34.828130 | orchestrator |  "msg": "Notifying handlers"
2026-02-17 05:46:34.828134 | orchestrator | }
2026-02-17 05:46:34.828138 | orchestrator |
2026-02-17 05:46:34.828143 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-17 05:46:34.828147 | orchestrator | Tuesday 17 February 2026 05:45:15 +0000 (0:00:00.637) 0:03:28.671 ******
2026-02-17 05:46:34.828151 | orchestrator |
2026-02-17 05:46:34.828165 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-17 05:46:34.828170 | orchestrator | Tuesday 17 February 2026 05:45:15 +0000 (0:00:00.077) 0:03:28.748 ******
2026-02-17 05:46:34.828174 | orchestrator |
2026-02-17 05:46:34.828178 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-17 05:46:34.828182 | orchestrator | Tuesday 17 February 2026 05:45:15 +0000 (0:00:00.073) 0:03:28.823 ******
2026-02-17 05:46:34.828186 | orchestrator |
2026-02-17 05:46:34.828191 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-17 05:46:34.828195 | orchestrator | Tuesday 17 February 2026 05:45:15 +0000 (0:00:00.073) 0:03:28.897 ******
2026-02-17 05:46:34.828199 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:46:34.828218 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:46:34.828222 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:46:34.828227 | orchestrator |
2026-02-17 05:46:34.828231 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-17 05:46:34.828235 | orchestrator | Tuesday 17 February 2026 05:45:32 +0000 (0:00:16.736) 0:03:45.633 ******
2026-02-17 05:46:34.828239 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:46:34.828243 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:46:34.828248 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:46:34.828252 | orchestrator |
2026-02-17 05:46:34.828256 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-17 05:46:34.828260 | orchestrator | Tuesday 17 February 2026 05:45:48 +0000 (0:00:16.228) 0:04:01.862 ******
2026-02-17 05:46:34.828264 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-17 05:46:34.828269 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-17 05:46:34.828273 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-17 05:46:34.828277 | orchestrator |
2026-02-17 05:46:34.828281 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-17 05:46:34.828285 | orchestrator | Tuesday 17 February 2026 05:46:04 +0000 (0:00:15.867) 0:04:17.730 ******
2026-02-17 05:46:34.828289 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:46:34.828293 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:46:34.828298 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:46:34.828302 | orchestrator |
2026-02-17 05:46:34.828306 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-17 05:46:34.828310 | orchestrator | Tuesday 17 February 2026 05:46:21 +0000 (0:00:17.120) 0:04:34.851 ******
2026-02-17 05:46:34.828315 | orchestrator | Pausing for 5 seconds
2026-02-17 05:46:34.828319 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:46:34.828323 | orchestrator |
2026-02-17 05:46:34.828327 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-17 05:46:34.828332 | orchestrator | Tuesday 17 February 2026 05:46:27 +0000 (0:00:05.190) 0:04:40.041 ******
2026-02-17 05:46:34.828365 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:46:34.828370 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:46:34.828374 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:46:34.828378 | orchestrator |
2026-02-17 05:46:34.828383 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-17 05:46:34.828396 | orchestrator | Tuesday 17 February 2026 05:46:27 +0000 (0:00:00.857) 0:04:40.898 ******
2026-02-17 05:46:34.828401 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:46:34.828405 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:46:34.828409 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:46:34.828413 | orchestrator |
2026-02-17 05:46:34.828417 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-17 05:46:34.828422 | orchestrator | Tuesday 17 February 2026 05:46:28 +0000 (0:00:00.848) 0:04:41.747 ******
2026-02-17 05:46:34.828426 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:46:34.828430 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:46:34.828434 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:46:34.828438 | orchestrator |
2026-02-17 05:46:34.828443 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-17 05:46:34.828447 | orchestrator | Tuesday 17 February 2026 05:46:29 +0000 (0:00:00.867) 0:04:42.614 ******
2026-02-17 05:46:34.828451 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:46:34.828455 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:46:34.828459 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:46:34.828463 | orchestrator |
2026-02-17 05:46:34.828468 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-17 05:46:34.828472 | orchestrator | Tuesday 17 February 2026 05:46:30 +0000 (0:00:00.676) 0:04:43.291 ******
2026-02-17 05:46:34.828476 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:46:34.828480 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:46:34.828484 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:46:34.828489 | orchestrator |
2026-02-17 05:46:34.828497 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-17 05:46:34.828501 | orchestrator | Tuesday 17 February 2026 05:46:31 +0000 (0:00:00.865) 0:04:44.156 ******
2026-02-17 05:46:34.828505 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:46:34.828509 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:46:34.828514 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:46:34.828518 | orchestrator |
2026-02-17 05:46:34.828522 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-17 05:46:34.828526 | orchestrator | Tuesday 17 February 2026 05:46:32 +0000 (0:00:00.851) 0:04:45.007 ******
2026-02-17 05:46:34.828530 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-17 05:46:34.828535 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-17 05:46:34.828540 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-17 05:46:34.828545 | orchestrator |
2026-02-17 05:46:34.828550 | orchestrator | PLAY RECAP *********************************************************************
2026-02-17 05:46:34.828556 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-17 05:46:34.828562 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-17 05:46:34.828567 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-17 05:46:34.828575 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-17 05:46:34.828580 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-17 05:46:34.828585 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-17 05:46:34.828589 | orchestrator |
2026-02-17 05:46:34.828594 | orchestrator |
2026-02-17 05:46:34.828599 | orchestrator | TASKS RECAP ********************************************************************
2026-02-17 05:46:34.828604 | orchestrator | Tuesday 17 February 2026 05:46:34 +0000 (0:00:02.792) 0:04:47.800 ******
2026-02-17 05:46:34.828609 | orchestrator | ===============================================================================
2026-02-17 05:46:34.828614 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.13s
2026-02-17 05:46:34.828619 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.67s
2026-02-17 05:46:34.828623 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.12s
2026-02-17 05:46:34.828628 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.74s
2026-02-17 05:46:34.828633 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.23s
2026-02-17 05:46:34.828637 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 15.87s
2026-02-17 05:46:34.828642 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.33s
2026-02-17 05:46:34.828647 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 5.19s
2026-02-17 05:46:34.828652 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.76s
2026-02-17 05:46:34.828657 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.27s
2026-02-17 05:46:34.828662 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.79s
2026-02-17 05:46:34.828667 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.62s
2026-02-17 05:46:34.828672 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.53s
2026-02-17 05:46:34.828676 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.15s
2026-02-17 05:46:34.828681 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.10s
2026-02-17 05:46:34.828690 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.02s
2026-02-17 05:46:34.828697 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.72s
2026-02-17 05:46:35.322999 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.69s
2026-02-17 05:46:35.323082 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.66s
2026-02-17 05:46:35.323094 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.63s
2026-02-17 05:46:35.711259 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-17 05:46:35.711400 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-17 05:46:35.711418 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-17 05:46:35.724289 | orchestrator | + set -e
2026-02-17 05:46:35.724415 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-17 05:46:35.724430 | orchestrator | ++ export INTERACTIVE=false
2026-02-17 05:46:35.724442 | orchestrator | ++ INTERACTIVE=false
2026-02-17 05:46:35.724451 | orchestrator | ++ 
export OSISM_APPLY_RETRY=1 2026-02-17 05:46:35.724462 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 05:46:35.725508 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-02-17 05:46:37.933760 | orchestrator | 2026-02-17 05:46:37 | INFO  | Task 553a855b-137b-46ca-ad27-a40ae4b0b3d7 (ceph-rolling_update) was prepared for execution. 2026-02-17 05:46:37.934197 | orchestrator | 2026-02-17 05:46:37 | INFO  | It takes a moment until task 553a855b-137b-46ca-ad27-a40ae4b0b3d7 (ceph-rolling_update) has been started and output is visible here. 2026-02-17 05:48:07.893645 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-17 05:48:07.893816 | orchestrator | 2.16.14 2026-02-17 05:48:07.893843 | orchestrator | 2026-02-17 05:48:07.893862 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-17 05:48:07.893879 | orchestrator | 2026-02-17 05:48:07.893895 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-17 05:48:07.893913 | orchestrator | Tuesday 17 February 2026 05:46:46 +0000 (0:00:01.759) 0:00:01.759 ****** 2026-02-17 05:48:07.893930 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-17 05:48:07.893948 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-17 05:48:07.893966 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-17 05:48:07.893985 | orchestrator | skipping: [localhost] 2026-02-17 05:48:07.894005 | orchestrator | 2026-02-17 05:48:07.894098 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-17 05:48:07.894117 | orchestrator | 2026-02-17 05:48:07.894136 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-17 05:48:07.894157 | orchestrator | Tuesday 17 February 2026 
05:46:48 +0000 (0:00:02.144) 0:00:03.904 ****** 2026-02-17 05:48:07.894179 | orchestrator | ok: [testbed-node-0] => { 2026-02-17 05:48:07.894203 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-17 05:48:07.894224 | orchestrator | } 2026-02-17 05:48:07.894246 | orchestrator | ok: [testbed-node-1] => { 2026-02-17 05:48:07.894286 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-17 05:48:07.894305 | orchestrator | } 2026-02-17 05:48:07.894347 | orchestrator | ok: [testbed-node-2] => { 2026-02-17 05:48:07.894368 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-17 05:48:07.894386 | orchestrator | } 2026-02-17 05:48:07.894404 | orchestrator | ok: [testbed-node-3] => { 2026-02-17 05:48:07.894422 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-17 05:48:07.894440 | orchestrator | } 2026-02-17 05:48:07.894458 | orchestrator | ok: [testbed-node-4] => { 2026-02-17 05:48:07.894475 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-17 05:48:07.894492 | orchestrator | } 2026-02-17 05:48:07.894540 | orchestrator | ok: [testbed-node-5] => { 2026-02-17 05:48:07.894560 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-17 05:48:07.894577 | orchestrator | } 2026-02-17 05:48:07.894594 | orchestrator | ok: [testbed-manager] => { 2026-02-17 05:48:07.894612 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-17 05:48:07.894630 | orchestrator | } 2026-02-17 05:48:07.894649 | orchestrator | 2026-02-17 05:48:07.894670 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-17 05:48:07.894689 | orchestrator | Tuesday 17 February 2026 05:46:54 +0000 (0:00:06.166) 0:00:10.070 ****** 2026-02-17 05:48:07.894707 | orchestrator | skipping: [testbed-node-0] 2026-02-17 
05:48:07.894724 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:07.894740 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:07.894758 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:07.894776 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:07.894795 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:07.894807 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.894817 | orchestrator | 2026-02-17 05:48:07.894826 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-17 05:48:07.894837 | orchestrator | Tuesday 17 February 2026 05:47:03 +0000 (0:00:08.320) 0:00:18.391 ****** 2026-02-17 05:48:07.894847 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 05:48:07.894857 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 05:48:07.894866 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:48:07.894876 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 05:48:07.894885 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 05:48:07.894895 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:48:07.894905 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 05:48:07.894914 | orchestrator | 2026-02-17 05:48:07.894924 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-17 05:48:07.894934 | orchestrator | Tuesday 17 February 2026 05:47:35 +0000 (0:00:32.014) 0:00:50.406 ****** 2026-02-17 05:48:07.895019 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.895030 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.895040 | orchestrator | ok: 
[testbed-node-2] 2026-02-17 05:48:07.895050 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.895059 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:07.895069 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.895079 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.895088 | orchestrator | 2026-02-17 05:48:07.895098 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 05:48:07.895108 | orchestrator | Tuesday 17 February 2026 05:47:37 +0000 (0:00:02.331) 0:00:52.737 ****** 2026-02-17 05:48:07.895119 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-17 05:48:07.895130 | orchestrator | 2026-02-17 05:48:07.895140 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 05:48:07.895150 | orchestrator | Tuesday 17 February 2026 05:47:40 +0000 (0:00:03.011) 0:00:55.748 ****** 2026-02-17 05:48:07.895160 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.895171 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.895180 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:07.895190 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.895200 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:07.895210 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.895219 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.895229 | orchestrator | 2026-02-17 05:48:07.895261 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 05:48:07.895286 | orchestrator | Tuesday 17 February 2026 05:47:43 +0000 (0:00:02.822) 0:00:58.571 ****** 2026-02-17 05:48:07.895296 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.895306 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.895315 | orchestrator | ok: [testbed-node-2] 
2026-02-17 05:48:07.895381 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.895391 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:07.895401 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.895411 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.895420 | orchestrator | 2026-02-17 05:48:07.895430 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 05:48:07.895440 | orchestrator | Tuesday 17 February 2026 05:47:45 +0000 (0:00:01.998) 0:01:00.569 ****** 2026-02-17 05:48:07.895450 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.895460 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.895469 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:07.895479 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.895489 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:07.895498 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.895508 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.895518 | orchestrator | 2026-02-17 05:48:07.895527 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 05:48:07.895537 | orchestrator | Tuesday 17 February 2026 05:47:47 +0000 (0:00:02.551) 0:01:03.121 ****** 2026-02-17 05:48:07.895547 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.895557 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.895567 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:07.895576 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.895586 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:07.895606 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.895616 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.895626 | orchestrator | 2026-02-17 05:48:07.895636 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 05:48:07.895646 | orchestrator | Tuesday 17 February 2026 05:47:49 +0000 
(0:00:02.016) 0:01:05.137 ****** 2026-02-17 05:48:07.895656 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.895666 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.895676 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:07.895685 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.895695 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:07.895704 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.895714 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.895724 | orchestrator | 2026-02-17 05:48:07.895734 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 05:48:07.895744 | orchestrator | Tuesday 17 February 2026 05:47:52 +0000 (0:00:02.382) 0:01:07.520 ****** 2026-02-17 05:48:07.895753 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.895763 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.895773 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:07.895783 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.895792 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:07.895802 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.895812 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.895822 | orchestrator | 2026-02-17 05:48:07.895832 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 05:48:07.895841 | orchestrator | Tuesday 17 February 2026 05:47:54 +0000 (0:00:02.072) 0:01:09.593 ****** 2026-02-17 05:48:07.895851 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:07.895861 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:07.895871 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:07.895881 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:07.895890 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:07.895900 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:07.895910 | orchestrator | 
skipping: [testbed-manager] 2026-02-17 05:48:07.895920 | orchestrator | 2026-02-17 05:48:07.895929 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 05:48:07.895947 | orchestrator | Tuesday 17 February 2026 05:47:56 +0000 (0:00:02.322) 0:01:11.916 ****** 2026-02-17 05:48:07.895957 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.895967 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.895976 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:07.895986 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.895996 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:07.896005 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.896015 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.896025 | orchestrator | 2026-02-17 05:48:07.896035 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 05:48:07.896044 | orchestrator | Tuesday 17 February 2026 05:47:59 +0000 (0:00:02.373) 0:01:14.289 ****** 2026-02-17 05:48:07.896054 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:48:07.896064 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 05:48:07.896074 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:48:07.896084 | orchestrator | 2026-02-17 05:48:07.896094 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 05:48:07.896103 | orchestrator | Tuesday 17 February 2026 05:48:00 +0000 (0:00:01.674) 0:01:15.964 ****** 2026-02-17 05:48:07.896113 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:07.896123 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:07.896133 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:07.896143 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:07.896152 | orchestrator | ok: 
[testbed-node-4] 2026-02-17 05:48:07.896162 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:07.896172 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:07.896181 | orchestrator | 2026-02-17 05:48:07.896191 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 05:48:07.896201 | orchestrator | Tuesday 17 February 2026 05:48:03 +0000 (0:00:02.427) 0:01:18.391 ****** 2026-02-17 05:48:07.896211 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:48:07.896221 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 05:48:07.896231 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:48:07.896240 | orchestrator | 2026-02-17 05:48:07.896251 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 05:48:07.896261 | orchestrator | Tuesday 17 February 2026 05:48:06 +0000 (0:00:03.334) 0:01:21.726 ****** 2026-02-17 05:48:07.896278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 05:48:31.104753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 05:48:31.104858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 05:48:31.104872 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:31.104883 | orchestrator | 2026-02-17 05:48:31.104894 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 05:48:31.104905 | orchestrator | Tuesday 17 February 2026 05:48:07 +0000 (0:00:01.427) 0:01:23.154 ****** 2026-02-17 05:48:31.104917 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 
05:48:31.104930 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 05:48:31.104940 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 05:48:31.104974 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:31.104985 | orchestrator | 2026-02-17 05:48:31.104995 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 05:48:31.105005 | orchestrator | Tuesday 17 February 2026 05:48:09 +0000 (0:00:02.067) 0:01:25.222 ****** 2026-02-17 05:48:31.105017 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:31.105029 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:31.105040 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:31.105049 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:31.105059 | orchestrator | 2026-02-17 05:48:31.105069 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 05:48:31.105079 | orchestrator | Tuesday 17 February 2026 05:48:11 +0000 (0:00:01.219) 0:01:26.441 ****** 2026-02-17 05:48:31.105091 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6b2dae68d29f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 05:48:03.764142', 'end': '2026-02-17 05:48:03.810180', 'delta': '0:00:00.046038', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b2dae68d29f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 05:48:31.105119 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5939893342f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 05:48:04.634973', 'end': '2026-02-17 05:48:04.681472', 'delta': '0:00:00.046499', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5939893342f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 05:48:31.105216 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4f72f9ce519e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 05:48:05.229679', 'end': '2026-02-17 05:48:05.273726', 'delta': '0:00:00.044047', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f72f9ce519e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 05:48:31.105243 | orchestrator | 2026-02-17 05:48:31.105253 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 05:48:31.105268 | orchestrator | Tuesday 17 February 2026 05:48:12 +0000 (0:00:01.246) 0:01:27.687 ****** 2026-02-17 05:48:31.105278 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:31.105289 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:31.105301 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:31.105312 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:31.105323 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:31.105362 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:31.105374 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:31.105385 | orchestrator | 2026-02-17 05:48:31.105397 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 05:48:31.105408 | orchestrator | Tuesday 17 February 2026 05:48:14 +0000 
(0:00:02.220) 0:01:29.908 ****** 2026-02-17 05:48:31.105419 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:31.105430 | orchestrator | 2026-02-17 05:48:31.105441 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 05:48:31.105451 | orchestrator | Tuesday 17 February 2026 05:48:15 +0000 (0:00:01.271) 0:01:31.180 ****** 2026-02-17 05:48:31.105462 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:31.105473 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:31.105484 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:31.105495 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:31.105505 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:31.105516 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:31.105527 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:31.105538 | orchestrator | 2026-02-17 05:48:31.105549 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 05:48:31.105560 | orchestrator | Tuesday 17 February 2026 05:48:18 +0000 (0:00:02.277) 0:01:33.457 ****** 2026-02-17 05:48:31.105570 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:31.105582 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-17 05:48:31.105593 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 05:48:31.105604 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 05:48:31.105615 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-17 05:48:31.105627 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-17 05:48:31.105638 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-17 05:48:31.105648 | orchestrator | 2026-02-17 05:48:31.105658 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 05:48:31.105668 | orchestrator | 
Tuesday 17 February 2026 05:48:21 +0000 (0:00:03.263) 0:01:36.721 ****** 2026-02-17 05:48:31.105678 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:48:31.105687 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:48:31.105697 | orchestrator | ok: [testbed-node-2] 2026-02-17 05:48:31.105707 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:48:31.105717 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:48:31.105726 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:48:31.105737 | orchestrator | ok: [testbed-manager] 2026-02-17 05:48:31.105746 | orchestrator | 2026-02-17 05:48:31.105756 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 05:48:31.105766 | orchestrator | Tuesday 17 February 2026 05:48:23 +0000 (0:00:02.142) 0:01:38.864 ****** 2026-02-17 05:48:31.105775 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:31.105785 | orchestrator | 2026-02-17 05:48:31.105795 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 05:48:31.105805 | orchestrator | Tuesday 17 February 2026 05:48:24 +0000 (0:00:01.191) 0:01:40.055 ****** 2026-02-17 05:48:31.105815 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:31.105831 | orchestrator | 2026-02-17 05:48:31.105841 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 05:48:31.105851 | orchestrator | Tuesday 17 February 2026 05:48:26 +0000 (0:00:01.307) 0:01:41.363 ****** 2026-02-17 05:48:31.105861 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:31.105871 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:31.105881 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:31.105890 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:31.105900 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:31.105910 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:31.105919 | orchestrator | 
skipping: [testbed-manager] 2026-02-17 05:48:31.105930 | orchestrator | 2026-02-17 05:48:31.105940 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 05:48:31.105950 | orchestrator | Tuesday 17 February 2026 05:48:28 +0000 (0:00:02.669) 0:01:44.033 ****** 2026-02-17 05:48:31.105959 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:31.105969 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:31.105979 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:31.105989 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:31.105999 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:31.106008 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:31.106087 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:48:41.943766 | orchestrator | 2026-02-17 05:48:41.943915 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 05:48:41.943937 | orchestrator | Tuesday 17 February 2026 05:48:31 +0000 (0:00:02.326) 0:01:46.360 ****** 2026-02-17 05:48:41.943949 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:41.943962 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:41.943973 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:41.943984 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:41.943995 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:41.944007 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:41.944018 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:48:41.944029 | orchestrator | 2026-02-17 05:48:41.944040 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 05:48:41.944052 | orchestrator | Tuesday 17 February 2026 05:48:33 +0000 (0:00:02.146) 0:01:48.507 ****** 2026-02-17 05:48:41.944063 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:41.944075 | orchestrator | 
skipping: [testbed-node-1] 2026-02-17 05:48:41.944086 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:41.944097 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:41.944108 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:41.944119 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:41.944130 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:48:41.944141 | orchestrator | 2026-02-17 05:48:41.944152 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 05:48:41.944186 | orchestrator | Tuesday 17 February 2026 05:48:35 +0000 (0:00:01.979) 0:01:50.486 ****** 2026-02-17 05:48:41.944206 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:41.944224 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:41.944244 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:41.944262 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:41.944282 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:41.944296 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:41.944310 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:48:41.944322 | orchestrator | 2026-02-17 05:48:41.944360 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 05:48:41.944373 | orchestrator | Tuesday 17 February 2026 05:48:37 +0000 (0:00:02.176) 0:01:52.663 ****** 2026-02-17 05:48:41.944386 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:41.944399 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:41.944411 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:41.944424 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:41.944460 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:41.944474 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:41.944486 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:48:41.944499 | orchestrator | 
2026-02-17 05:48:41.944511 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 05:48:41.944525 | orchestrator | Tuesday 17 February 2026 05:48:39 +0000 (0:00:02.103) 0:01:54.766 ****** 2026-02-17 05:48:41.944538 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:41.944550 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:41.944562 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:41.944575 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:41.944588 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:41.944601 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:41.944613 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:48:41.944626 | orchestrator | 2026-02-17 05:48:41.944637 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 05:48:41.944648 | orchestrator | Tuesday 17 February 2026 05:48:41 +0000 (0:00:02.252) 0:01:57.019 ****** 2026-02-17 05:48:41.944662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:41.944676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:41.944689 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:41.944723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:48:41.944737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:41.944750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:41.944789 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:41.944815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69a38e66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 05:48:41.944836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:41.944867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284254 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:48:42.284435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-02-17 05:48:42.284478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:48:42.284541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-17 05:48:42.284553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd83a89d3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14'], 
'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 05:48:42.284628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284652 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:48:42.284664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284686 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.284698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:48:42.284718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.631835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.631944 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.631965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f3163655', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 05:48:42.631982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.631994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.632033 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:48:42.632077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-02-17 05:48:42.632108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'uuids': ['b2ca6990-5b39-46e1-9ab9-fa89aec205ee'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP']}})  2026-02-17 05:48:42.632130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce83e4f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 05:48:42.632150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f']}})  2026-02-17 05:48:42.632170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.632189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.632209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-18-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:48:42.632275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.723250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac', 'dm-uuid-CRYPT-LUKS2-edb3e2e5a632414f8a4f0db6f2dd266c-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 05:48:42.723403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.723434 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'uuids': ['edb3e2e5-a632-414f-8a4f-0db6f2dd266c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac']}})  2026-02-17 05:48:42.723455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3']}})  2026-02-17 05:48:42.723474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.723545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d567a40', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 05:48:42.723596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.723618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.723636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.723655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'uuids': ['dab48e76-bd26-40e2-b056-8f58a903c67b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL']}})  2026-02-17 05:48:42.723676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP', 'dm-uuid-CRYPT-LUKS2-b2ca69905b3946e19ab9fa89aec205ee-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 05:48:42.723720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd9c05b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 05:48:42.903828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b']}})  2026-02-17 05:48:42.903924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.903940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.903953 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:48:42.903965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.903977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08', 'dm-uuid-CRYPT-LUKS2-40a19dfb08344771a8e6cfe7009b1e1d-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 05:48:42.904009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.904039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'uuids': ['40a19dfb-0834-4771-a8e6-cfe7009b1e1d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08']}})  2026-02-17 05:48:42.904059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0']}})  2026-02-17 05:48:42.904071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.904083 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:42.904100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '95350bd6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 
'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 05:48:42.904128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.975793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.975891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL', 
'dm-uuid-CRYPT-LUKS2-dab48e76bd2640e2b0568f58a903c67b-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 05:48:42.975908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.975920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'uuids': ['4833064e-8ca1-479d-a0c0-581ea0d1065c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7']}})  2026-02-17 05:48:42.975931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': 
None, 'sas_device_handle': None, 'serial': 'b093f3ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 05:48:42.975964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee']}})  2026-02-17 05:48:42.975975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.976009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.976021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:48:42.976032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.976042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV', 'dm-uuid-CRYPT-LUKS2-f004f31e7c734e098d3470dc55158438-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 05:48:42.976052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:42.976069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'uuids': ['f004f31e-7c73-4e09-8d34-70dc55158438'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV']}})  2026-02-17 05:48:42.976079 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841']}})  2026-02-17 05:48:42.976097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.284717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37d8f58a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 05:48:44.284844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.284862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.284876 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:44.284890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7', 'dm-uuid-CRYPT-LUKS2-4833064e8ca1479da0c0581ea0d1065c-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 05:48:44.284904 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:48:44.284915 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.284952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.284965 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.284976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:48:44.284988 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.285008 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.285020 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.285054 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '214cfdef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 05:48:44.526547 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.526653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:48:44.526693 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:48:44.526709 | orchestrator | 2026-02-17 05:48:44.526721 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 05:48:44.526733 | orchestrator | Tuesday 17 February 2026 05:48:44 +0000 (0:00:02.515) 0:01:59.534 ****** 2026-02-17 05:48:44.526747 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:44.526761 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.526772 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.526800 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.526832 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.526843 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.526863 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.526883 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69a38e66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.526906 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.597841 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.597974 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598001 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598088 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598122 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598135 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598166 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598188 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598210 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd83a89d3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598224 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.598250 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902077 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:48:44.902166 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902181 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902189 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902198 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902220 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902246 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902271 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902293 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f3163655', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902308 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902329 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:44.902421 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:48:44.902441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'uuids': ['b2ca6990-5b39-46e1-9ab9-fa89aec205ee'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP']}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce83e4f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025507 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f']}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025596 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025610 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:48:45.025629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025649 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac', 'dm-uuid-CRYPT-LUKS2-edb3e2e5a632414f8a4f0db6f2dd266c-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'uuids': ['edb3e2e5-a632-414f-8a4f-0db6f2dd266c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac']}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.025741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3']}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.155230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.155419 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d567a40', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.155463 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.155495 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.155509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.155522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'uuids': ['dab48e76-bd26-40e2-b056-8f58a903c67b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL']}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.155548 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP', 'dm-uuid-CRYPT-LUKS2-b2ca69905b3946e19ab9fa89aec205ee-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.155562 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd9c05b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.155580 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b']}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.268772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.268876 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.268911 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.268949 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.268963 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:48:45.268977 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08', 'dm-uuid-CRYPT-LUKS2-40a19dfb08344771a8e6cfe7009b1e1d-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.268988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.269021 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'uuids': ['40a19dfb-0834-4771-a8e6-cfe7009b1e1d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08']}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.269040 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0']}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.269063 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.269085 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '95350bd6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518160 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518295 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518312 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'uuids': ['4833064e-8ca1-479d-a0c0-581ea0d1065c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7']}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518325 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b093f3ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518394 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee']}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518421 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL', 'dm-uuid-CRYPT-LUKS2-dab48e76bd2640e2b0568f58a903c67b-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518454 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518464 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.518475 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:48:45.518496 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV', 'dm-uuid-CRYPT-LUKS2-f004f31e7c734e098d3470dc55158438-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.601981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.602146 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.602172 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'uuids': ['f004f31e-7c73-4e09-8d34-70dc55158438'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV']}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.602189 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 
1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.602201 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.602230 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:48:45.602268 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841']}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.602282 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.602320 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.602357 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.602368 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:48:45.602403 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37d8f58a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:49:00.535058 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '214cfdef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_214cfdef-2253-4ef6-bb28-2ea2555c75c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:49:00.535207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:49:00.535225 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:49:00.535252 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:49:00.535264 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:49:00.535276 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:49:00.535287 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7', 'dm-uuid-CRYPT-LUKS2-4833064e8ca1479da0c0581ea0d1065c-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 05:49:00.535307 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:49:00.535317 | orchestrator |
2026-02-17 05:49:00.535328 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-17 05:49:00.535406 | orchestrator | Tuesday 17 February 2026 05:48:46 +0000 (0:00:02.491) 0:02:02.025 ******
2026-02-17 05:49:00.535419 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:49:00.535430 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:49:00.535439 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:49:00.535449 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:49:00.535459 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:49:00.535469 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:49:00.535478 | orchestrator | ok: [testbed-manager]
2026-02-17 05:49:00.535488 | orchestrator |
2026-02-17 05:49:00.535498 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-17 05:49:00.535508 | orchestrator | Tuesday 17 February 2026 05:48:49 +0000 (0:00:02.621) 0:02:04.647 ******
2026-02-17 05:49:00.535518 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:49:00.535527 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:49:00.535537 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:49:00.535547 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:49:00.535556 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:49:00.535567 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:49:00.535579 | orchestrator | ok: [testbed-manager]
2026-02-17 05:49:00.535590 | orchestrator |
2026-02-17 05:49:00.535602 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 05:49:00.535613 | orchestrator | Tuesday 17 February 2026 05:48:51 +0000 (0:00:01.887) 0:02:06.535 ******
2026-02-17 05:49:00.535625 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:49:00.535636 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:49:00.535647 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:49:00.535657 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:49:00.535669 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:49:00.535680 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:49:00.535691 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:49:00.535702 | orchestrator |
2026-02-17 05:49:00.535713 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 05:49:00.535724 | orchestrator | Tuesday 17 February 2026 05:48:53 +0000 (0:00:02.504) 0:02:09.039 ******
2026-02-17 05:49:00.535742 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:49:00.535753 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:49:00.535764 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:49:00.535775 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:00.535786 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:49:00.535797 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:49:00.535808 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:49:00.535820 | orchestrator |
2026-02-17 05:49:00.535831 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 05:49:00.535842 | orchestrator | Tuesday 17 February 2026 05:48:55 +0000 (0:00:02.011) 0:02:11.051 ******
2026-02-17 05:49:00.535853 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:49:00.535864 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:49:00.535875 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:49:00.535886 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:00.535897 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:49:00.535908 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:49:00.535919 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-02-17 05:49:00.535930 | orchestrator |
2026-02-17 05:49:00.535941 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 05:49:00.535952 | orchestrator | Tuesday 17 February 2026 05:48:58 +0000 (0:00:02.682) 0:02:13.733 ******
2026-02-17 05:49:00.535964 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:49:00.535974 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:49:00.535993 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:49:00.536003 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:00.536012 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:49:00.536022 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:49:00.536032 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:49:00.536041 | orchestrator |
2026-02-17 05:49:00.536051 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 05:49:39.036386 | orchestrator | Tuesday 17 February 2026 05:49:00 +0000 (0:00:02.055) 0:02:15.789 ******
2026-02-17 05:49:39.036519 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-17 05:49:39.036538 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 05:49:39.036550 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 05:49:39.036561 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-17 05:49:39.036572 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 05:49:39.036584 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 05:49:39.036595 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-17 05:49:39.036606 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 05:49:39.036617 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 05:49:39.036628 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 05:49:39.036639 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-17 05:49:39.036650 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-17 05:49:39.036661 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 05:49:39.036672 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-17 05:49:39.036684 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-17 05:49:39.036694 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 05:49:39.036706 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-17 05:49:39.036717 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-17 05:49:39.036728 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-17 05:49:39.036739 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-17 05:49:39.036750 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-17 05:49:39.036761 | orchestrator |
2026-02-17 05:49:39.036773 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 05:49:39.036785 | orchestrator | Tuesday 17 February 2026 05:49:04 +0000 (0:00:03.832) 0:02:19.621 ******
2026-02-17 05:49:39.036797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 05:49:39.036808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 05:49:39.036819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 05:49:39.036830 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:49:39.036842 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-17 05:49:39.036853 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-17 05:49:39.036864 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-17 05:49:39.036875 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:49:39.036888 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 05:49:39.036901 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 05:49:39.036914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 05:49:39.036927 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:49:39.036940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 05:49:39.036953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 05:49:39.036965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 05:49:39.036978 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-17 05:49:39.036992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-17 05:49:39.037029 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-17 05:49:39.037042 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:39.037055 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-17 05:49:39.037069 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-17 05:49:39.037082 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-17 05:49:39.037094 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:49:39.037107 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:49:39.037120 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-17 05:49:39.037163 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-17 05:49:39.037185 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-17 05:49:39.037198 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:49:39.037211 | orchestrator |
2026-02-17 05:49:39.037224 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 05:49:39.037237 | orchestrator | Tuesday 17 February 2026 05:49:06 +0000 (0:00:02.349) 0:02:21.971 ******
2026-02-17 05:49:39.037249 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:49:39.037260 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:49:39.037271 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:49:39.037282 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:49:39.037294 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-17 05:49:39.037305 | orchestrator |
2026-02-17 05:49:39.037317 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 05:49:39.037329 | orchestrator | Tuesday 17 February 2026 05:49:08 +0000 (0:00:01.967) 0:02:23.939 ******
2026-02-17 05:49:39.037340 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:39.037375 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:49:39.037388 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:49:39.037399 | orchestrator |
2026-02-17 05:49:39.037410 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 05:49:39.037421 | orchestrator | Tuesday 17 February 2026 05:49:10 +0000 (0:00:01.660) 0:02:25.600 ******
2026-02-17 05:49:39.037432 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:39.037443 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:49:39.037471 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:49:39.037482 | orchestrator |
2026-02-17 05:49:39.037493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 05:49:39.037504 | orchestrator | Tuesday 17 February 2026 05:49:11 +0000 (0:00:01.473) 0:02:27.073 ******
2026-02-17 05:49:39.037515 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:39.037527 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:49:39.037538 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:49:39.037549 | orchestrator |
2026-02-17 05:49:39.037560 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 05:49:39.037571 | orchestrator | Tuesday 17 February 2026 05:49:13 +0000 (0:00:01.392) 0:02:28.465 ******
2026-02-17 05:49:39.037582 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:49:39.037593 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:49:39.037604 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:49:39.037615 | orchestrator |
2026-02-17 05:49:39.037626 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 05:49:39.037637 | orchestrator | Tuesday 17 February 2026 05:49:14 +0000 (0:00:01.456) 0:02:29.922 ******
2026-02-17 05:49:39.037648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 05:49:39.037659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 05:49:39.037670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 05:49:39.037681 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:39.037701 | orchestrator |
2026-02-17 05:49:39.037712 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 05:49:39.037723 | orchestrator | Tuesday 17 February 2026 05:49:16 +0000 (0:00:01.411) 0:02:31.334 ******
2026-02-17 05:49:39.037735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 05:49:39.037746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 05:49:39.037757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 05:49:39.037768 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:39.037779 | orchestrator |
2026-02-17 05:49:39.037790 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 05:49:39.037801 | orchestrator | Tuesday 17 February 2026 05:49:17 +0000 (0:00:01.700) 0:02:33.035 ******
2026-02-17 05:49:39.037812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 05:49:39.037823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 05:49:39.037834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 05:49:39.037845 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:49:39.037856 | orchestrator |
2026-02-17 05:49:39.037867 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 05:49:39.037878 | orchestrator | Tuesday 17 February 2026 05:49:19 +0000 (0:00:01.753) 0:02:34.789 ******
2026-02-17 05:49:39.037889 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:49:39.037900 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:49:39.037911 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:49:39.037922 | orchestrator |
2026-02-17 05:49:39.037933 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 05:49:39.037944 | orchestrator | Tuesday 17 February 2026 05:49:21 +0000 (0:00:01.549) 0:02:36.530 ******
2026-02-17 05:49:39.037956 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-17 05:49:39.037967 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-17 05:49:39.037978 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-17 05:49:39.037989 | orchestrator |
2026-02-17 05:49:39.038000 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 05:49:39.038011 | orchestrator | Tuesday 17 February 2026 05:49:22 +0000 (0:00:01.549) 0:02:38.080 ******
2026-02-17 05:49:39.038079 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 05:49:39.038091 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 05:49:39.038104 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 05:49:39.038115 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 05:49:39.038126 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 05:49:39.038137 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 05:49:39.038148 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 05:49:39.038159 | orchestrator |
2026-02-17 05:49:39.038170 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 05:49:39.038181 | orchestrator | Tuesday 17 February 2026 05:49:24 +0000 (0:00:02.094) 0:02:40.175 ******
2026-02-17 05:49:39.038192 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 05:49:39.038203 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 05:49:39.038214 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 05:49:39.038225 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 05:49:39.038236 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 05:49:39.038247 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 05:49:39.038265 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 05:49:39.038276 | orchestrator |
2026-02-17 05:49:39.038287 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-17 05:49:39.038298 | orchestrator | Tuesday 17 February 2026 05:49:28 +0000 (0:00:03.159) 0:02:43.334 ******
2026-02-17 05:49:39.038309 | orchestrator | changed: [testbed-node-3]
2026-02-17 05:49:39.038320 | orchestrator | changed: [testbed-node-4]
2026-02-17 05:49:39.038331 | orchestrator | changed: [testbed-node-5]
2026-02-17 05:49:39.038413 | orchestrator | changed: [testbed-manager]
2026-02-17 05:50:15.594534 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:50:15.594681 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:50:15.594707 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:50:15.594728 | orchestrator |
2026-02-17 05:50:15.594862 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-17 05:50:15.594895 | orchestrator | Tuesday 17 February 2026 05:49:39 +0000 (0:00:10.958) 0:02:54.293 ******
2026-02-17 05:50:15.594914 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.594934 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.594955 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.594973 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.594993 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.595013 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.595033 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.595053 | orchestrator |
2026-02-17 05:50:15.595073 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-17 05:50:15.595096 | orchestrator | Tuesday 17 February 2026 05:49:41 +0000 (0:00:02.282) 0:02:56.576 ******
2026-02-17 05:50:15.595116 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.595139 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.595162 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.595183 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.595204 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.595226 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.595248 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.595270 | orchestrator |
2026-02-17 05:50:15.595293 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-17 05:50:15.595314 | orchestrator | Tuesday 17 February 2026 05:49:43 +0000 (0:00:02.045) 0:02:58.621 ******
2026-02-17 05:50:15.595335 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.595356 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:50:15.595405 | orchestrator | changed: [testbed-node-1]
2026-02-17 05:50:15.595423 | orchestrator | changed: [testbed-node-2]
2026-02-17 05:50:15.595440 | orchestrator | changed: [testbed-node-3]
2026-02-17 05:50:15.595456 | orchestrator | changed: [testbed-node-4]
2026-02-17 05:50:15.595475 | orchestrator | changed: [testbed-node-5]
2026-02-17 05:50:15.595493 | orchestrator |
2026-02-17 05:50:15.595512 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-17 05:50:15.595531 | orchestrator | Tuesday 17 February 2026 05:49:46 +0000 (0:00:03.134) 0:03:01.756 ******
2026-02-17 05:50:15.595550 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-17 05:50:15.595570 | orchestrator |
2026-02-17 05:50:15.595590 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-17 05:50:15.595608 | orchestrator | Tuesday 17 February 2026 05:49:49 +0000 (0:00:02.913) 0:03:04.669 ******
2026-02-17 05:50:15.595627 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.595645 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.595664 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.595683 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.595702 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.595720 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.595769 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.595789 | orchestrator |
2026-02-17 05:50:15.595807 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-17 05:50:15.595827 | orchestrator | Tuesday 17 February 2026 05:49:51 +0000 (0:00:01.902) 0:03:06.571 ******
2026-02-17 05:50:15.595845 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.595864 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.595883 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.595902 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.595921 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.595939 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.595959 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.595977 | orchestrator |
2026-02-17 05:50:15.595994 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-17 05:50:15.596013 | orchestrator | Tuesday 17 February 2026 05:49:53 +0000 (0:00:02.093) 0:03:08.665 ******
2026-02-17 05:50:15.596031 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.596049 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.596076 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.596095 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.596113 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.596130 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.596147 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.596165 | orchestrator |
2026-02-17 05:50:15.596185 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-17 05:50:15.596204 | orchestrator | Tuesday 17 February 2026 05:49:55 +0000 (0:00:02.031) 0:03:10.697 ******
2026-02-17 05:50:15.596219 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.596237 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.596254 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.596272 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.596289 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.596308 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.596325 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.596343 | orchestrator |
2026-02-17 05:50:15.596389 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-17 05:50:15.596408 | orchestrator | Tuesday 17 February 2026 05:49:57 +0000 (0:00:02.191) 0:03:12.888 ******
2026-02-17 05:50:15.596426 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.596447 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.596467 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.596485 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.596503 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.596521 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.596541 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.596559 | orchestrator |
2026-02-17 05:50:15.596579 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-17 05:50:15.596598 | orchestrator | Tuesday 17 February 2026 05:49:59 +0000 (0:00:01.934) 0:03:14.823 ******
2026-02-17 05:50:15.596647 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.596666 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.596684 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.596702 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.596721 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.596739 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.596757 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.596775 | orchestrator |
2026-02-17 05:50:15.596793 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-17 05:50:15.596811 | orchestrator | Tuesday 17 February 2026 05:50:01 +0000 (0:00:02.170) 0:03:16.993 ******
2026-02-17 05:50:15.596830 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.596848 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.596865 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.596901 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.596920 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.596938 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.596957 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.596974 | orchestrator |
2026-02-17 05:50:15.596992 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-17 05:50:15.597010 | orchestrator | Tuesday 17 February 2026 05:50:03 +0000 (0:00:02.013) 0:03:19.007 ******
2026-02-17 05:50:15.597028 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.597046 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.597063 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:50:15.597081 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:50:15.597099 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:50:15.597118 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:50:15.597135 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:50:15.597155 | orchestrator |
2026-02-17 05:50:15.597173 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-17 05:50:15.597190 | orchestrator | Tuesday 17 February 2026 05:50:06 +0000 (0:00:02.291) 0:03:21.298 ******
2026-02-17 05:50:15.597208 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:50:15.597225 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:50:15.597244 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 05:50:15.597261 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:15.597279 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:15.597298 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:15.597316 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:15.597333 | orchestrator | 2026-02-17 05:50:15.597352 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-17 05:50:15.597396 | orchestrator | Tuesday 17 February 2026 05:50:08 +0000 (0:00:02.181) 0:03:23.480 ****** 2026-02-17 05:50:15.597414 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:15.597432 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:15.597449 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:15.597467 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:15.597484 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:15.597502 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:15.597519 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:15.597535 | orchestrator | 2026-02-17 05:50:15.597553 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-17 05:50:15.597570 | orchestrator | Tuesday 17 February 2026 05:50:10 +0000 (0:00:02.195) 0:03:25.676 ****** 2026-02-17 05:50:15.597588 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:15.597605 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:15.597622 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:15.597640 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:15.597657 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:15.597676 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:15.597695 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:15.597714 | orchestrator | 2026-02-17 05:50:15.597732 | orchestrator | TASK [ceph-validate : Validate lvm osd 
scenario] ******************************* 2026-02-17 05:50:15.597750 | orchestrator | Tuesday 17 February 2026 05:50:12 +0000 (0:00:01.954) 0:03:27.630 ****** 2026-02-17 05:50:15.597768 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:15.597788 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:15.597806 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:15.597826 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:15.597843 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:15.597863 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:15.597881 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:15.597900 | orchestrator | 2026-02-17 05:50:15.597931 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-17 05:50:15.597950 | orchestrator | Tuesday 17 February 2026 05:50:14 +0000 (0:00:01.856) 0:03:29.487 ****** 2026-02-17 05:50:15.597982 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:15.598000 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:15.598116 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:15.598141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 05:50:15.598161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 05:50:15.598178 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:15.598196 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 05:50:15.598215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 
'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 05:50:15.598233 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:15.598253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 05:50:15.598290 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 05:50:44.893232 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.893344 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:44.893414 | orchestrator | 2026-02-17 05:50:44.893430 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-17 05:50:44.893443 | orchestrator | Tuesday 17 February 2026 05:50:16 +0000 (0:00:02.531) 0:03:32.018 ****** 2026-02-17 05:50:44.893454 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:44.893465 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:44.893476 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:44.893487 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.893499 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.893510 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.893521 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:44.893532 | orchestrator | 2026-02-17 05:50:44.893544 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-17 05:50:44.893556 | orchestrator | Tuesday 17 February 2026 05:50:18 +0000 (0:00:02.059) 0:03:34.078 ****** 2026-02-17 05:50:44.893567 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:44.893578 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:44.893589 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:44.893600 | orchestrator | skipping: 
[testbed-node-3] 2026-02-17 05:50:44.893611 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.893622 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.893633 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:44.893643 | orchestrator | 2026-02-17 05:50:44.893655 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-17 05:50:44.893666 | orchestrator | Tuesday 17 February 2026 05:50:21 +0000 (0:00:02.274) 0:03:36.352 ****** 2026-02-17 05:50:44.893677 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:44.893688 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:44.893699 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:44.893710 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.893737 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.893760 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.893773 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:44.893786 | orchestrator | 2026-02-17 05:50:44.893799 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-17 05:50:44.893812 | orchestrator | Tuesday 17 February 2026 05:50:23 +0000 (0:00:01.999) 0:03:38.352 ****** 2026-02-17 05:50:44.893854 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:44.893867 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:44.893879 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:44.893891 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.893904 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.893916 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.893929 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:44.893941 | orchestrator | 2026-02-17 05:50:44.893953 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-17 05:50:44.893964 | 
orchestrator | Tuesday 17 February 2026 05:50:25 +0000 (0:00:02.479) 0:03:40.832 ****** 2026-02-17 05:50:44.893975 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:44.893986 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:44.893996 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:44.894007 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.894080 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.894095 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.894114 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:44.894132 | orchestrator | 2026-02-17 05:50:44.894150 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-17 05:50:44.894168 | orchestrator | Tuesday 17 February 2026 05:50:27 +0000 (0:00:02.112) 0:03:42.944 ****** 2026-02-17 05:50:44.894194 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:44.894214 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:44.894234 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:44.894254 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.894273 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.894292 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.894309 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:44.894325 | orchestrator | 2026-02-17 05:50:44.894337 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-17 05:50:44.894395 | orchestrator | Tuesday 17 February 2026 05:50:29 +0000 (0:00:01.997) 0:03:44.941 ****** 2026-02-17 05:50:44.894409 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:44.894420 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:44.894431 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:44.894442 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:44.894454 | orchestrator | included: 
/ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:50:44.894465 | orchestrator | 2026-02-17 05:50:44.894477 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-17 05:50:44.894488 | orchestrator | Tuesday 17 February 2026 05:50:32 +0000 (0:00:02.543) 0:03:47.485 ****** 2026-02-17 05:50:44.894499 | orchestrator | ok: [testbed-node-3] 2026-02-17 05:50:44.894511 | orchestrator | ok: [testbed-node-4] 2026-02-17 05:50:44.894521 | orchestrator | ok: [testbed-node-5] 2026-02-17 05:50:44.894532 | orchestrator | 2026-02-17 05:50:44.894543 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-17 05:50:44.894554 | orchestrator | Tuesday 17 February 2026 05:50:33 +0000 (0:00:01.477) 0:03:48.962 ****** 2026-02-17 05:50:44.894566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 05:50:44.894578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 05:50:44.894589 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.894600 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 05:50:44.894630 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 05:50:44.894662 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.894689 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  
2026-02-17 05:50:44.894710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 05:50:44.894725 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.894742 | orchestrator | 2026-02-17 05:50:44.894758 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-17 05:50:44.894775 | orchestrator | Tuesday 17 February 2026 05:50:35 +0000 (0:00:01.453) 0:03:50.416 ****** 2026-02-17 05:50:44.894794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:44.894814 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:44.894832 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.894851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:44.894865 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 
'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:44.894876 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.894887 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:44.894899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:44.894917 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.894929 | orchestrator | 2026-02-17 05:50:44.894940 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-17 05:50:44.894951 | orchestrator | Tuesday 17 February 2026 05:50:36 +0000 (0:00:01.695) 0:03:52.111 ****** 2026-02-17 05:50:44.894961 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.894972 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.894983 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.894994 | orchestrator | 2026-02-17 05:50:44.895004 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-17 05:50:44.895015 | orchestrator | Tuesday 17 February 2026 05:50:38 +0000 (0:00:01.481) 0:03:53.592 ****** 2026-02-17 05:50:44.895026 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.895037 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.895047 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.895058 | 
orchestrator | 2026-02-17 05:50:44.895078 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-17 05:50:44.895089 | orchestrator | Tuesday 17 February 2026 05:50:39 +0000 (0:00:01.368) 0:03:54.961 ****** 2026-02-17 05:50:44.895100 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.895111 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.895122 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.895133 | orchestrator | 2026-02-17 05:50:44.895143 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-17 05:50:44.895154 | orchestrator | Tuesday 17 February 2026 05:50:41 +0000 (0:00:01.546) 0:03:56.507 ****** 2026-02-17 05:50:44.895165 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:44.895176 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:44.895187 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:44.895197 | orchestrator | 2026-02-17 05:50:44.895208 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-17 05:50:44.895219 | orchestrator | Tuesday 17 February 2026 05:50:42 +0000 (0:00:01.404) 0:03:57.912 ****** 2026-02-17 05:50:44.895240 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'}) 2026-02-17 05:50:46.454293 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}) 2026-02-17 05:50:46.454443 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'}) 2026-02-17 05:50:46.454461 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 
'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}) 2026-02-17 05:50:46.454473 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'}) 2026-02-17 05:50:46.454484 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'}) 2026-02-17 05:50:46.454495 | orchestrator | 2026-02-17 05:50:46.454508 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-17 05:50:46.454520 | orchestrator | Tuesday 17 February 2026 05:50:44 +0000 (0:00:02.233) 0:04:00.145 ****** 2026-02-17 05:50:46.454537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-366ad200-d272-50e2-9bbd-3174591b235f/osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1771299906.943081, 'mtime': 1771299906.9370809, 'ctime': 1771299906.9370809, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-366ad200-d272-50e2-9bbd-3174591b235f/osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 
'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:46.454572 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3/osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1771299927.5313904, 'mtime': 1771299927.5263903, 'ctime': 1771299927.5263903, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3/osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:46.454607 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:46.454640 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b/osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1771299904.9841657, 'mtime': 1771299904.9791656, 'ctime': 
1771299904.9791656, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b/osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:46.454653 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0/osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1771299923.425447, 'mtime': 1771299923.422447, 'ctime': 1771299923.422447, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0/osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 
'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:46.454665 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:46.454683 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-415e7a1a-a305-5338-824f-e9750ca5ebee/osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 954, 'dev': 6, 'nlink': 1, 'atime': 1771299904.9195206, 'mtime': 1771299904.9135206, 'ctime': 1771299904.9135206, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-415e7a1a-a305-5338-824f-e9750ca5ebee/osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:46.454712 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841/osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 
'size': 0, 'inode': 964, 'dev': 6, 'nlink': 1, 'atime': 1771299923.563805, 'mtime': 1771299923.558805, 'ctime': 1771299923.558805, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841/osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.553806 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:57.553924 | orchestrator | 2026-02-17 05:50:57.553942 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-17 05:50:57.553956 | orchestrator | Tuesday 17 February 2026 05:50:46 +0000 (0:00:01.569) 0:04:01.715 ****** 2026-02-17 05:50:57.553969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 05:50:57.553983 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 05:50:57.553995 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:57.554007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 
05:50:57.554087 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 05:50:57.554100 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:57.554111 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 05:50:57.554123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 05:50:57.554174 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:57.554186 | orchestrator | 2026-02-17 05:50:57.554209 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-17 05:50:57.554222 | orchestrator | Tuesday 17 February 2026 05:50:47 +0000 (0:00:01.409) 0:04:03.124 ****** 2026-02-17 05:50:57.554235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554264 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554275 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:57.554286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554298 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554309 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:57.554320 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554332 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554345 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:57.554410 | orchestrator | 2026-02-17 05:50:57.554426 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-17 05:50:57.554440 | orchestrator | Tuesday 17 February 2026 05:50:49 +0000 (0:00:01.480) 0:04:04.605 ****** 2026-02-17 05:50:57.554453 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'})  2026-02-17 05:50:57.554466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'})  2026-02-17 05:50:57.554480 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:57.554512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'})  2026-02-17 05:50:57.554526 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'})  2026-02-17 05:50:57.554539 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:57.554552 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'})  2026-02-17 05:50:57.554565 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'})  2026-02-17 05:50:57.554588 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:57.554601 | orchestrator | 2026-02-17 05:50:57.554614 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-17 05:50:57.554628 | orchestrator | Tuesday 17 February 2026 05:50:51 +0000 (0:00:01.857) 0:04:06.463 ****** 2026-02-17 05:50:57.554641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-366ad200-d272-50e2-9bbd-3174591b235f', 'data_vg': 'ceph-366ad200-d272-50e2-9bbd-3174591b235f'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': 
{'data': 'osd-block-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3', 'data_vg': 'ceph-c478ad6b-fe8a-5fdf-805d-21e03f23f5d3'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554668 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:57.554680 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b', 'data_vg': 'ceph-33b7cf65-698e-5092-b1e1-7b58bfaeaf8b'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554700 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8aff4da6-f81a-563d-a807-caa30e1cb6b0', 'data_vg': 'ceph-8aff4da6-f81a-563d-a807-caa30e1cb6b0'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554711 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:57.554723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-415e7a1a-a305-5338-824f-e9750ca5ebee', 'data_vg': 'ceph-415e7a1a-a305-5338-824f-e9750ca5ebee'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554734 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-67fd3cab-24d5-5329-b459-0f3a5a04c841', 'data_vg': 'ceph-67fd3cab-24d5-5329-b459-0f3a5a04c841'}, 'ansible_loop_var': 'item'})  2026-02-17 05:50:57.554746 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:57.554757 | orchestrator | 2026-02-17 05:50:57.554768 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-17 05:50:57.554779 | 
orchestrator | Tuesday 17 February 2026 05:50:52 +0000 (0:00:01.428) 0:04:07.891 ****** 2026-02-17 05:50:57.554790 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:57.554801 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:57.554812 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:57.554823 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:50:57.554834 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:50:57.554845 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:50:57.554856 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:57.554866 | orchestrator | 2026-02-17 05:50:57.554877 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-17 05:50:57.554889 | orchestrator | Tuesday 17 February 2026 05:50:54 +0000 (0:00:02.005) 0:04:09.897 ****** 2026-02-17 05:50:57.554900 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:50:57.554911 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:50:57.554922 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:50:57.554933 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:50:57.554944 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 05:50:57.554962 | orchestrator | 2026-02-17 05:50:57.554973 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-17 05:50:57.554984 | orchestrator | Tuesday 17 February 2026 05:50:57 +0000 (0:00:02.801) 0:04:12.698 ****** 2026-02-17 05:50:57.555004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094708 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094736 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:09.094744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094778 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:09.094784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 
05:51:09.094804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094836 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:09.094843 | orchestrator | 2026-02-17 05:51:09.094851 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-17 05:51:09.094859 | orchestrator | Tuesday 17 February 2026 05:50:58 +0000 (0:00:01.468) 0:04:14.167 ****** 2026-02-17 05:51:09.094866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094921 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:09.094928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094941 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094961 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:09.094968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.094981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095017 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:09.095024 | orchestrator | 2026-02-17 05:51:09.095031 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-17 05:51:09.095038 | orchestrator | Tuesday 17 February 2026 05:51:00 +0000 (0:00:01.826) 0:04:15.994 ****** 2026-02-17 05:51:09.095045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095051 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095078 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:09.095085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095126 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:09.095134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095156 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 05:51:09.095179 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:09.095187 | orchestrator | 2026-02-17 05:51:09.095195 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-17 05:51:09.095203 | orchestrator | Tuesday 17 February 2026 05:51:02 +0000 (0:00:01.504) 0:04:17.498 ****** 2026-02-17 05:51:09.095211 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:09.095219 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:09.095226 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:09.095234 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:09.095242 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:09.095249 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:09.095257 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:51:09.095265 | orchestrator | 2026-02-17 05:51:09.095273 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-17 05:51:09.095280 | orchestrator | Tuesday 17 February 2026 05:51:04 +0000 (0:00:02.239) 0:04:19.738 ****** 2026-02-17 05:51:09.095288 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:09.095296 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:09.095304 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:09.095311 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:09.095319 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:09.095327 | orchestrator | 
skipping: [testbed-node-5] 2026-02-17 05:51:09.095334 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:51:09.095342 | orchestrator | 2026-02-17 05:51:09.095350 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-17 05:51:09.095383 | orchestrator | Tuesday 17 February 2026 05:51:06 +0000 (0:00:02.237) 0:04:21.975 ****** 2026-02-17 05:51:09.095391 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:09.095399 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:09.095407 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:09.095414 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:09.095421 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:09.095428 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:09.095435 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:51:09.095443 | orchestrator | 2026-02-17 05:51:09.095450 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-17 05:51:09.095458 | orchestrator | Tuesday 17 February 2026 05:51:08 +0000 (0:00:02.145) 0:04:24.121 ****** 2026-02-17 05:51:09.095469 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:21.117405 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:21.117528 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:21.117545 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:21.117557 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:21.117568 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:21.117580 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:51:21.117591 | orchestrator | 2026-02-17 05:51:21.117604 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-17 05:51:21.117616 | orchestrator | Tuesday 17 February 2026 05:51:11 +0000 (0:00:02.650) 0:04:26.771 ****** 2026-02-17 05:51:21.117628 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:21.117640 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:21.117651 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:21.117688 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:21.117700 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:21.117711 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:21.117722 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:51:21.117733 | orchestrator | 2026-02-17 05:51:21.117744 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-17 05:51:21.117755 | orchestrator | Tuesday 17 February 2026 05:51:13 +0000 (0:00:02.349) 0:04:29.121 ****** 2026-02-17 05:51:21.117766 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:21.117777 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:21.117788 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:21.117799 | orchestrator | skipping: [testbed-node-3] 
2026-02-17 05:51:21.117810 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:21.117821 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:21.117832 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:51:21.117842 | orchestrator | 2026-02-17 05:51:21.117853 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-17 05:51:21.117865 | orchestrator | Tuesday 17 February 2026 05:51:16 +0000 (0:00:02.514) 0:04:31.635 ****** 2026-02-17 05:51:21.117876 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:21.117886 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:21.117897 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:21.117911 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:21.117923 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:21.117935 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:21.117947 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:51:21.117959 | orchestrator | 2026-02-17 05:51:21.117971 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-17 05:51:21.117984 | orchestrator | Tuesday 17 February 2026 05:51:18 +0000 (0:00:02.304) 0:04:33.940 ****** 2026-02-17 05:51:21.118013 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-17 05:51:21.118087 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-17 05:51:21.118103 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-17 05:51:21.118118 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-17 05:51:21.118131 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-17 05:51:21.118156 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-17 05:51:21.118169 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:21.118181 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-17 05:51:21.118193 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-17 05:51:21.118206 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-17 05:51:21.118218 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-17 05:51:21.118240 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-17 05:51:21.118254 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-17 05:51:21.118267 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:21.118297 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-17 05:51:21.118309 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-17 05:51:21.118319 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-17 05:51:21.118330 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-17 05:51:21.118341 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-17 05:51:21.118352 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-17 05:51:21.118423 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:21.118441 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-17 05:51:21.118460 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-17 05:51:21.118471 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile 
rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-17 05:51:21.118482 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-17 05:51:21.118500 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-17 05:51:21.118512 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-17 05:51:21.118523 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-17 05:51:21.118534 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-17 05:51:21.118544 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-17 05:51:21.118555 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-17 05:51:21.118574 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-17 05:51:21.118585 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:21.118596 | orchestrator | skipping: 
[testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-17 05:51:21.118607 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-17 05:51:21.118618 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-17 05:51:21.118629 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-17 05:51:21.118640 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-17 05:51:21.118659 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-17 05:51:26.073971 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-17 05:51:26.074122 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-17 05:51:26.074140 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-17 05:51:26.074153 | orchestrator | skipping: 
[testbed-manager] 2026-02-17 05:51:26.074164 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-17 05:51:26.074174 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:26.074184 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-17 05:51:26.074195 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-17 05:51:26.074205 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-17 05:51:26.074214 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:26.074224 | orchestrator | 2026-02-17 05:51:26.074235 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-02-17 05:51:26.074246 | orchestrator | Tuesday 17 February 2026 05:51:21 +0000 (0:00:02.429) 0:04:36.370 ****** 2026-02-17 05:51:26.074255 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:51:26.074265 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:51:26.074292 | orchestrator | skipping: [testbed-node-2] 2026-02-17 05:51:26.074302 | orchestrator | skipping: [testbed-node-3] 2026-02-17 05:51:26.074312 | orchestrator | skipping: [testbed-node-4] 2026-02-17 05:51:26.074321 | orchestrator | skipping: [testbed-node-5] 2026-02-17 05:51:26.074331 | orchestrator | skipping: [testbed-manager] 2026-02-17 05:51:26.074341 | orchestrator | 2026-02-17 05:51:26.074351 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-02-17 
05:51:26.074453 | orchestrator | Tuesday 17 February 2026 05:51:23 +0000 (0:00:02.517) 0:04:38.888 ******
2026-02-17 05:51:26.074466 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-17 05:51:26.074476 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-17 05:51:26.074486 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-17 05:51:26.074497 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-17 05:51:26.074507 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-17 05:51:26.074519 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-17 05:51:26.074530 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:51:26.074541 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-17 05:51:26.074553 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-17 05:51:26.074564 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-17 05:51:26.074575 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-17 05:51:26.074604 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-17 05:51:26.074617 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-17 05:51:26.074628 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:51:26.074639 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-17 05:51:26.074650 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-17 05:51:26.074661 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-17 05:51:26.074672 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-17 05:51:26.074683 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-17 05:51:26.074694 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-17 05:51:26.074713 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:51:26.074725 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-17 05:51:26.074742 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-17 05:51:26.074766 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-17 05:51:26.074789 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-17 05:51:26.074801 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-17 05:51:26.074812 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-17 05:51:26.074824 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:51:26.074835 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-17 05:51:26.074846 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-17 05:51:26.074858 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-17 05:51:26.074869 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-17 05:51:26.074881 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-17 05:51:26.074892 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-17 05:51:26.074903 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-17 05:51:26.074913 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-17 05:51:26.074923 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:51:26.074939 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-17 05:52:07.620893 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-17 05:52:07.621004 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-17 05:52:07.621020 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-17 05:52:07.621058 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-17 05:52:07.621072 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-17 05:52:07.621084 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-17 05:52:07.621095 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-17 05:52:07.621107 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:52:07.621134 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-17 05:52:07.621146 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-17 05:52:07.621157 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:52:07.621168 | orchestrator |
2026-02-17 05:52:07.621180 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-17 05:52:07.621193 | orchestrator | Tuesday 17 February 2026 05:51:26 +0000 (0:00:02.443) 0:04:41.331 ******
2026-02-17 05:52:07.621204 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:07.621215 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:52:07.621226 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:52:07.621237 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:52:07.621247 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:52:07.621258 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:52:07.621269 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:52:07.621280 | orchestrator |
2026-02-17 05:52:07.621291 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-17 05:52:07.621302 | orchestrator | Tuesday 17 February 2026 05:51:28 +0000 (0:00:02.296) 0:04:43.628 ******
2026-02-17 05:52:07.621316 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:07.621334 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:52:07.621380 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:52:07.621403 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:52:07.621430 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:52:07.621447 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:52:07.621465 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:52:07.621482 | orchestrator |
2026-02-17 05:52:07.621500 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-17 05:52:07.621517 | orchestrator | Tuesday 17 February 2026 05:51:30 +0000 (0:00:02.401) 0:04:46.029 ******
2026-02-17 05:52:07.621536 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:07.621554 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:52:07.621572 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:52:07.621591 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:52:07.621610 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:52:07.621629 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:52:07.621647 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:52:07.621664 | orchestrator |
2026-02-17 05:52:07.621684 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-17 05:52:07.621703 | orchestrator | Tuesday 17 February 2026 05:51:33 +0000 (0:00:02.449) 0:04:48.479 ******
2026-02-17 05:52:07.621722 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-17 05:52:07.621757 | orchestrator |
2026-02-17 05:52:07.621776 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-17 05:52:07.621794 | orchestrator | Tuesday 17 February 2026 05:51:36 +0000 (0:00:02.819) 0:04:51.299 ******
2026-02-17 05:52:07.621812 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-17 05:52:07.621825 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-17 05:52:07.621836 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-17 05:52:07.621847 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-17 05:52:07.621858 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-17 05:52:07.621890 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-17 05:52:07.621902 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-17 05:52:07.621913 | orchestrator |
2026-02-17 05:52:07.621924 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-17 05:52:07.621935 | orchestrator | Tuesday 17 February 2026 05:51:38 +0000 (0:00:02.452) 0:04:53.751 ******
2026-02-17 05:52:07.621946 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:07.621956 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:52:07.621967 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:52:07.621978 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:52:07.621989 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:52:07.622000 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:52:07.622010 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:52:07.622080 | orchestrator |
2026-02-17 05:52:07.622092 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-17 05:52:07.622103 | orchestrator | Tuesday 17 February 2026 05:51:40 +0000 (0:00:02.446) 0:04:56.198 ******
2026-02-17 05:52:07.622114 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:07.622125 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:52:07.622136 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:52:07.622147 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:52:07.622158 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:52:07.622168 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:52:07.622179 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:52:07.622190 | orchestrator |
2026-02-17 05:52:07.622201 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-17 05:52:07.622212 | orchestrator | Tuesday 17 February 2026 05:51:42 +0000 (0:00:01.969) 0:04:58.167 ******
2026-02-17 05:52:07.622223 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:52:07.622235 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:07.622245 | orchestrator | ok: [testbed-node-2]
2026-02-17 05:52:07.622256 | orchestrator | ok: [testbed-node-3]
2026-02-17 05:52:07.622267 | orchestrator | ok: [testbed-node-4]
2026-02-17 05:52:07.622278 | orchestrator | ok: [testbed-node-5]
2026-02-17 05:52:07.622289 | orchestrator | ok: [testbed-manager]
2026-02-17 05:52:07.622299 | orchestrator |
2026-02-17 05:52:07.622310 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-17 05:52:07.622329 | orchestrator | Tuesday 17 February 2026 05:51:45 +0000 (0:00:02.846) 0:05:01.014 ******
2026-02-17 05:52:07.622340 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:07.622352 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:52:07.622423 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:52:07.622442 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:52:07.622460 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:52:07.622478 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:52:07.622498 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:52:07.622516 | orchestrator |
2026-02-17 05:52:07.622535 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-17 05:52:07.622567 | orchestrator | Tuesday 17 February 2026 05:51:48 +0000 (0:00:02.390) 0:05:03.405 ******
2026-02-17 05:52:07.622584 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:07.622604 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:52:07.622628 | orchestrator | skipping: [testbed-node-2]
2026-02-17 05:52:07.622646 | orchestrator | skipping: [testbed-node-3]
2026-02-17 05:52:07.622665 | orchestrator | skipping: [testbed-node-4]
2026-02-17 05:52:07.622683 | orchestrator | skipping: [testbed-node-5]
2026-02-17 05:52:07.622701 | orchestrator | skipping: [testbed-manager]
2026-02-17 05:52:07.622719 | orchestrator |
2026-02-17 05:52:07.622737 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-17 05:52:07.622756 | orchestrator | Tuesday 17 February 2026 05:51:50 +0000 (0:00:02.436) 0:05:05.842 ******
2026-02-17 05:52:07.622774 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:07.622792 | orchestrator |
2026-02-17 05:52:07.622811 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-17 05:52:07.622829 | orchestrator | Tuesday 17 February 2026 05:51:53 +0000 (0:00:02.687) 0:05:08.529 ******
2026-02-17 05:52:07.622846 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:07.622863 | orchestrator |
2026-02-17 05:52:07.622883 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-17 05:52:07.622901 | orchestrator |
2026-02-17 05:52:07.622920 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-17 05:52:07.622936 | orchestrator | Tuesday 17 February 2026 05:51:54 +0000 (0:00:01.574) 0:05:10.103 ******
2026-02-17 05:52:07.622954 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:07.622970 | orchestrator |
2026-02-17 05:52:07.622987 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-17 05:52:07.623005 | orchestrator | Tuesday 17 February 2026 05:51:56 +0000 (0:00:01.526) 0:05:11.630 ******
2026-02-17 05:52:07.623022 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:07.623039 | orchestrator |
2026-02-17 05:52:07.623056 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-17 05:52:07.623072 | orchestrator | Tuesday 17 February 2026 05:51:57 +0000 (0:00:01.153) 0:05:12.784 ******
2026-02-17 05:52:07.623095 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-17 05:52:07.623117 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-17 05:52:07.623152 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-17 05:52:36.173312 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-17 05:52:36.173472 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-17 05:52:36.173514 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}])
2026-02-17 05:52:36.173529 | orchestrator |
2026-02-17 05:52:36.173557 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-17 05:52:36.173569 | orchestrator |
2026-02-17 05:52:36.173580 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-17 05:52:36.173592 | orchestrator | Tuesday 17 February 2026 05:52:07 +0000 (0:00:10.096) 0:05:22.880 ******
2026-02-17 05:52:36.173603 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.173615 | orchestrator |
2026-02-17 05:52:36.173625 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-17 05:52:36.173636 | orchestrator | Tuesday 17 February 2026 05:52:09 +0000 (0:00:01.532) 0:05:24.413 ******
2026-02-17 05:52:36.173647 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.173658 | orchestrator |
2026-02-17 05:52:36.173669 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-17 05:52:36.173679 | orchestrator | Tuesday 17 February 2026 05:52:10 +0000 (0:00:01.199) 0:05:25.612 ******
2026-02-17 05:52:36.173690 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:36.173702 | orchestrator |
2026-02-17 05:52:36.173713 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-17 05:52:36.173724 | orchestrator | Tuesday 17 February 2026 05:52:11 +0000 (0:00:01.135) 0:05:26.747 ******
2026-02-17 05:52:36.173735 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.173745 | orchestrator |
2026-02-17 05:52:36.173756 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-17 05:52:36.173767 | orchestrator | Tuesday 17 February 2026 05:52:12 +0000 (0:00:01.132) 0:05:27.880 ******
2026-02-17 05:52:36.173778 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-17 05:52:36.173789 | orchestrator |
2026-02-17 05:52:36.173800 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-17 05:52:36.173811 | orchestrator | Tuesday 17 February 2026 05:52:13 +0000 (0:00:01.226) 0:05:29.107 ******
2026-02-17 05:52:36.173822 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.173835 | orchestrator |
2026-02-17 05:52:36.173848 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-17 05:52:36.173861 | orchestrator | Tuesday 17 February 2026 05:52:15 +0000 (0:00:01.588) 0:05:30.696 ******
2026-02-17 05:52:36.173873 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.173885 | orchestrator |
2026-02-17 05:52:36.173897 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-17 05:52:36.173909 | orchestrator | Tuesday 17 February 2026 05:52:16 +0000 (0:00:01.640) 0:05:31.825 ******
2026-02-17 05:52:36.173922 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.173933 | orchestrator |
2026-02-17 05:52:36.173946 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-17 05:52:36.173958 | orchestrator | Tuesday 17 February 2026 05:52:18 +0000 (0:00:01.151) 0:05:33.466 ******
2026-02-17 05:52:36.173970 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.173982 | orchestrator |
2026-02-17 05:52:36.173994 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-17 05:52:36.174006 | orchestrator | Tuesday 17 February 2026 05:52:19 +0000 (0:00:01.204) 0:05:34.617 ******
2026-02-17 05:52:36.174080 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.174093 | orchestrator |
2026-02-17 05:52:36.174106 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-17 05:52:36.174127 | orchestrator | Tuesday 17 February 2026 05:52:20 +0000 (0:00:01.204) 0:05:35.822 ******
2026-02-17 05:52:36.174140 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.174152 | orchestrator |
2026-02-17 05:52:36.174164 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-17 05:52:36.174177 | orchestrator | Tuesday 17 February 2026 05:52:21 +0000 (0:00:01.209) 0:05:37.032 ******
2026-02-17 05:52:36.174189 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:36.174200 | orchestrator |
2026-02-17 05:52:36.174211 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-17 05:52:36.174222 | orchestrator | Tuesday 17 February 2026 05:52:22 +0000 (0:00:01.136) 0:05:38.168 ******
2026-02-17 05:52:36.174233 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.174244 | orchestrator |
2026-02-17 05:52:36.174268 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-17 05:52:36.174280 | orchestrator | Tuesday 17 February 2026 05:52:24 +0000 (0:00:01.155) 0:05:39.324 ******
2026-02-17 05:52:36.174302 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 05:52:36.174332 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 05:52:36.174343 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 05:52:36.174375 | orchestrator |
2026-02-17 05:52:36.174387 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-17 05:52:36.174398 | orchestrator | Tuesday 17 February 2026 05:52:25 +0000 (0:00:01.746) 0:05:41.071 ******
2026-02-17 05:52:36.174409 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:36.174420 | orchestrator |
2026-02-17 05:52:36.174431 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-17 05:52:36.174442 | orchestrator | Tuesday 17 February 2026 05:52:27 +0000 (0:00:01.295) 0:05:42.367 ******
2026-02-17 05:52:36.174453 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 05:52:36.174464 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 05:52:36.174474 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 05:52:36.174485 | orchestrator |
2026-02-17 05:52:36.174496 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-17 05:52:36.174507 | orchestrator | Tuesday 17 February 2026 05:52:30 +0000 (0:00:03.266) 0:05:45.633 ******
2026-02-17 05:52:36.174517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 05:52:36.174529 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 05:52:36.174540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 05:52:36.174551 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:36.174562 | orchestrator |
2026-02-17 05:52:36.174579 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-17 05:52:36.174590 | orchestrator | Tuesday 17 February 2026 05:52:31 +0000 (0:00:01.446) 0:05:47.080 ******
2026-02-17 05:52:36.174603 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-17 05:52:36.174616 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-17 05:52:36.174627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-17 05:52:36.174639 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:36.174650 | orchestrator |
2026-02-17 05:52:36.174661 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-17 05:52:36.174679 | orchestrator | Tuesday 17 February 2026 05:52:33 +0000 (0:00:01.916) 0:05:48.996 ******
2026-02-17 05:52:36.174691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-17 05:52:36.174706 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-17 05:52:36.174717 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-17 05:52:36.174728 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:36.174752 | orchestrator |
2026-02-17 05:52:36.174774 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-17 05:52:36.174785 | orchestrator | Tuesday 17 February 2026 05:52:34 +0000 (0:00:01.192) 0:05:50.189 ******
2026-02-17 05:52:36.174805 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6b2dae68d29f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 05:52:27.595607', 'end': '2026-02-17 05:52:27.636309', 'delta': '0:00:00.040702', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b2dae68d29f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-17 05:52:55.300940 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5939893342f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 05:52:28.180815', 'end': '2026-02-17 05:52:28.238938', 'delta': '0:00:00.058123', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5939893342f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-17 05:52:55.301058 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4f72f9ce519e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 05:52:29.018200', 'end': '2026-02-17 05:52:29.068517', 'delta': '0:00:00.050317', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f72f9ce519e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-17 05:52:55.301095 | orchestrator |
2026-02-17 05:52:55.301107 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-17 05:52:55.301117 | orchestrator | Tuesday 17 February 2026 05:52:36 +0000 (0:00:01.242) 0:05:51.431 ******
2026-02-17 05:52:55.301126 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:55.301137 | orchestrator |
2026-02-17 05:52:55.301146 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-17 05:52:55.301155 | orchestrator | Tuesday 17 February 2026 05:52:37 +0000 (0:00:01.613) 0:05:53.045 ******
2026-02-17 05:52:55.301163 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.301173 | orchestrator |
2026-02-17 05:52:55.301183 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-17 05:52:55.301191 | orchestrator | Tuesday 17 February 2026 05:52:39 +0000 (0:00:01.271) 0:05:54.316 ******
2026-02-17 05:52:55.301200 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:55.301209 | orchestrator |
2026-02-17 05:52:55.301217 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-17 05:52:55.301226 | orchestrator | Tuesday 17 February 2026 05:52:40 +0000 (0:00:01.130) 0:05:55.447 ******
2026-02-17 05:52:55.301235 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-17 05:52:55.301243 | orchestrator |
2026-02-17 05:52:55.301252 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-17 05:52:55.301260 | orchestrator | Tuesday 17 February 2026 05:52:42 +0000 (0:00:02.113) 0:05:57.560 ******
2026-02-17 05:52:55.301269 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:52:55.301278 | orchestrator |
2026-02-17 05:52:55.301286 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-17 05:52:55.301295 | orchestrator | Tuesday 17 February 2026 05:52:43 +0000 (0:00:01.161) 0:05:58.721 ******
2026-02-17 05:52:55.301303 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.301312 | orchestrator |
2026-02-17 05:52:55.301321 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-17 05:52:55.301329 | orchestrator | Tuesday 17 February 2026 05:52:44 +0000 (0:00:01.205) 0:05:59.927 ******
2026-02-17 05:52:55.301338 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.301346 | orchestrator |
2026-02-17 05:52:55.301856 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-17 05:52:55.301879 | orchestrator | Tuesday 17 February 2026 05:52:45 +0000 (0:00:01.266) 0:06:01.194 ******
2026-02-17 05:52:55.301890 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.301900 | orchestrator |
2026-02-17 05:52:55.301909 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-17 05:52:55.301918 | orchestrator | Tuesday 17 February 2026 05:52:47 +0000 (0:00:01.133) 0:06:02.327 ******
2026-02-17 05:52:55.301927 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.301935 | orchestrator |
2026-02-17 05:52:55.301945 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-17 05:52:55.301954 | orchestrator | Tuesday 17 February 2026 05:52:48 +0000 (0:00:01.179) 0:06:03.507 ******
2026-02-17 05:52:55.301962 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.301971 | orchestrator |
2026-02-17 05:52:55.301980 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-17 05:52:55.301989 | orchestrator | Tuesday 17 February 2026 05:52:49 +0000 (0:00:01.156) 0:06:04.663 ******
2026-02-17 05:52:55.301997 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.302006 | orchestrator |
2026-02-17 05:52:55.302068 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-17 05:52:55.302081 | orchestrator | Tuesday 17 February 2026 05:52:50 +0000 (0:00:01.106) 0:06:05.770 ******
2026-02-17 05:52:55.302089 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.302098 | orchestrator |
2026-02-17 05:52:55.302107 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-17 05:52:55.302138 | orchestrator | Tuesday 17 February 2026 05:52:51 +0000 (0:00:01.175) 0:06:06.945 ******
2026-02-17 05:52:55.302164 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.302173 | orchestrator |
2026-02-17 05:52:55.302182 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-17 05:52:55.302192 | orchestrator | Tuesday 17 February 2026 05:52:52 +0000 (0:00:01.134) 0:06:08.080 ******
2026-02-17 05:52:55.302200 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:52:55.302209 | orchestrator |
2026-02-17 05:52:55.302218 | orchestrator | TASK
[ceph-facts : Collect existed devices] ************************************ 2026-02-17 05:52:55.302227 | orchestrator | Tuesday 17 February 2026 05:52:53 +0000 (0:00:01.156) 0:06:09.237 ****** 2026-02-17 05:52:55.302246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:52:55.302258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:52:55.302268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:52:55.302279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:52:55.302290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:52:55.302299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:52:55.302308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:52:55.302336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69a38e66', 'removable': '0', 
'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 05:52:56.542964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:52:56.543056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:52:56.543069 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:52:56.543078 | orchestrator | 2026-02-17 05:52:56.543086 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 05:52:56.543094 | orchestrator | Tuesday 17 February 2026 05:52:55 +0000 (0:00:01.314) 0:06:10.551 ****** 2026-02-17 05:52:56.543103 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:52:56.543113 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:52:56.543142 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:52:56.543163 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:52:56.543188 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:52:56.543195 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:52:56.543202 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:52:56.543216 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69a38e66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:52:56.543239 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:53:51.238739 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:53:51.238825 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.238834 | orchestrator | 2026-02-17 05:53:51.238841 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 05:53:51.238848 | 
orchestrator | Tuesday 17 February 2026 05:52:56 +0000 (0:00:01.253) 0:06:11.805 ****** 2026-02-17 05:53:51.238853 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:53:51.238859 | orchestrator | 2026-02-17 05:53:51.238865 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 05:53:51.238870 | orchestrator | Tuesday 17 February 2026 05:52:58 +0000 (0:00:01.559) 0:06:13.365 ****** 2026-02-17 05:53:51.238891 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:53:51.238896 | orchestrator | 2026-02-17 05:53:51.238902 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 05:53:51.238908 | orchestrator | Tuesday 17 February 2026 05:52:59 +0000 (0:00:01.230) 0:06:14.595 ****** 2026-02-17 05:53:51.238913 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:53:51.238918 | orchestrator | 2026-02-17 05:53:51.238923 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 05:53:51.238928 | orchestrator | Tuesday 17 February 2026 05:53:00 +0000 (0:00:01.620) 0:06:16.216 ****** 2026-02-17 05:53:51.238934 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.238939 | orchestrator | 2026-02-17 05:53:51.238944 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 05:53:51.238949 | orchestrator | Tuesday 17 February 2026 05:53:02 +0000 (0:00:01.127) 0:06:17.343 ****** 2026-02-17 05:53:51.238955 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.238960 | orchestrator | 2026-02-17 05:53:51.238965 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 05:53:51.238970 | orchestrator | Tuesday 17 February 2026 05:53:03 +0000 (0:00:01.265) 0:06:18.609 ****** 2026-02-17 05:53:51.238975 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.238981 | orchestrator | 2026-02-17 05:53:51.238986 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 05:53:51.238991 | orchestrator | Tuesday 17 February 2026 05:53:04 +0000 (0:00:01.176) 0:06:19.785 ****** 2026-02-17 05:53:51.238997 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:53:51.239002 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-17 05:53:51.239007 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-17 05:53:51.239012 | orchestrator | 2026-02-17 05:53:51.239017 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 05:53:51.239023 | orchestrator | Tuesday 17 February 2026 05:53:06 +0000 (0:00:02.015) 0:06:21.801 ****** 2026-02-17 05:53:51.239028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 05:53:51.239034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 05:53:51.239039 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 05:53:51.239044 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.239049 | orchestrator | 2026-02-17 05:53:51.239054 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 05:53:51.239059 | orchestrator | Tuesday 17 February 2026 05:53:07 +0000 (0:00:01.198) 0:06:22.999 ****** 2026-02-17 05:53:51.239065 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.239070 | orchestrator | 2026-02-17 05:53:51.239075 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 05:53:51.239080 | orchestrator | Tuesday 17 February 2026 05:53:08 +0000 (0:00:01.154) 0:06:24.153 ****** 2026-02-17 05:53:51.239086 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:53:51.239102 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 
05:53:51.239108 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:53:51.239113 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 05:53:51.239119 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 05:53:51.239124 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 05:53:51.239129 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 05:53:51.239134 | orchestrator | 2026-02-17 05:53:51.239139 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 05:53:51.239144 | orchestrator | Tuesday 17 February 2026 05:53:11 +0000 (0:00:02.177) 0:06:26.331 ****** 2026-02-17 05:53:51.239150 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:53:51.239159 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 05:53:51.239164 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:53:51.239169 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 05:53:51.239184 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 05:53:51.239190 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 05:53:51.239195 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 05:53:51.239200 | orchestrator | 2026-02-17 05:53:51.239206 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-17 05:53:51.239211 | orchestrator | Tuesday 17 February 2026 05:53:13 +0000 (0:00:02.931) 0:06:29.263 
****** 2026-02-17 05:53:51.239216 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-17 05:53:51.239221 | orchestrator | 2026-02-17 05:53:51.239226 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-17 05:53:51.239231 | orchestrator | Tuesday 17 February 2026 05:53:16 +0000 (0:00:02.182) 0:06:31.445 ****** 2026-02-17 05:53:51.239237 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.239242 | orchestrator | 2026-02-17 05:53:51.239247 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-17 05:53:51.239252 | orchestrator | Tuesday 17 February 2026 05:53:17 +0000 (0:00:01.271) 0:06:32.716 ****** 2026-02-17 05:53:51.239257 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.239262 | orchestrator | 2026-02-17 05:53:51.239267 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-17 05:53:51.239273 | orchestrator | Tuesday 17 February 2026 05:53:18 +0000 (0:00:01.187) 0:06:33.904 ****** 2026-02-17 05:53:51.239278 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-17 05:53:51.239283 | orchestrator | 2026-02-17 05:53:51.239288 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-17 05:53:51.239293 | orchestrator | Tuesday 17 February 2026 05:53:20 +0000 (0:00:02.279) 0:06:36.183 ****** 2026-02-17 05:53:51.239299 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:53:51.239304 | orchestrator | 2026-02-17 05:53:51.239310 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-17 05:53:51.239316 | orchestrator | Tuesday 17 February 2026 05:53:22 +0000 (0:00:01.192) 0:06:37.376 ****** 2026-02-17 05:53:51.239322 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:53:51.239328 | orchestrator | ok: [testbed-node-0 
-> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 05:53:51.239334 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:53:51.239339 | orchestrator | 2026-02-17 05:53:51.239345 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-17 05:53:51.239370 | orchestrator | Tuesday 17 February 2026 05:53:24 +0000 (0:00:02.464) 0:06:39.841 ****** 2026-02-17 05:53:51.239376 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-02-17 05:53:51.239381 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-02-17 05:53:51.239388 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-02-17 05:53:51.239394 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-02-17 05:53:51.239399 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-02-17 05:53:51.239405 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-02-17 05:53:51.239411 | orchestrator | 2026-02-17 05:53:51.239417 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-17 05:53:51.239427 | orchestrator | Tuesday 17 February 2026 05:53:37 +0000 (0:00:13.206) 0:06:53.048 ****** 2026-02-17 05:53:51.239433 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:53:51.239439 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:53:51.239444 | orchestrator | 2026-02-17 05:53:51.239450 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-17 05:53:51.239456 | orchestrator | Tuesday 17 February 2026 
05:53:41 +0000 (0:00:03.950) 0:06:56.998 ******
2026-02-17 05:53:51.239461 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:53:51.239467 | orchestrator |
2026-02-17 05:53:51.239472 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 05:53:51.239481 | orchestrator | Tuesday 17 February 2026 05:53:44 +0000 (0:00:02.474) 0:06:59.472 ******
2026-02-17 05:53:51.239487 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-17 05:53:51.239493 | orchestrator |
2026-02-17 05:53:51.239499 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 05:53:51.239504 | orchestrator | Tuesday 17 February 2026 05:53:45 +0000 (0:00:01.524) 0:07:00.997 ******
2026-02-17 05:53:51.239510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-17 05:53:51.239516 | orchestrator |
2026-02-17 05:53:51.239521 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 05:53:51.239527 | orchestrator | Tuesday 17 February 2026 05:53:47 +0000 (0:00:01.580) 0:07:02.578 ******
2026-02-17 05:53:51.239533 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:53:51.239538 | orchestrator |
2026-02-17 05:53:51.239544 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 05:53:51.239549 | orchestrator | Tuesday 17 February 2026 05:53:48 +0000 (0:00:01.566) 0:07:04.144 ******
2026-02-17 05:53:51.239555 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:53:51.239561 | orchestrator |
2026-02-17 05:53:51.239566 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 05:53:51.239572 | orchestrator | Tuesday 17 February 2026 05:53:50 +0000 (0:00:01.151) 0:07:05.296 ******
2026-02-17 05:53:51.239578 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:53:51.239583 | orchestrator |
2026-02-17 05:53:51.239592 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 05:54:43.670135 | orchestrator | Tuesday 17 February 2026 05:53:51 +0000 (0:00:01.201) 0:07:06.498 ******
2026-02-17 05:54:43.670253 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670270 | orchestrator |
2026-02-17 05:54:43.670283 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 05:54:43.670294 | orchestrator | Tuesday 17 February 2026 05:53:52 +0000 (0:00:01.133) 0:07:07.631 ******
2026-02-17 05:54:43.670305 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.670317 | orchestrator |
2026-02-17 05:54:43.670328 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 05:54:43.670339 | orchestrator | Tuesday 17 February 2026 05:53:53 +0000 (0:00:01.584) 0:07:09.216 ******
2026-02-17 05:54:43.670418 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670430 | orchestrator |
2026-02-17 05:54:43.670441 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 05:54:43.670476 | orchestrator | Tuesday 17 February 2026 05:53:55 +0000 (0:00:01.121) 0:07:10.338 ******
2026-02-17 05:54:43.670488 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670499 | orchestrator |
2026-02-17 05:54:43.670510 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 05:54:43.670522 | orchestrator | Tuesday 17 February 2026 05:53:56 +0000 (0:00:01.181) 0:07:11.519 ******
2026-02-17 05:54:43.670533 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.670544 | orchestrator |
2026-02-17 05:54:43.670555 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 05:54:43.670566 | orchestrator | Tuesday 17 February 2026 05:53:57 +0000 (0:00:01.541) 0:07:13.060 ******
2026-02-17 05:54:43.670601 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.670613 | orchestrator |
2026-02-17 05:54:43.670624 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 05:54:43.670636 | orchestrator | Tuesday 17 February 2026 05:53:59 +0000 (0:00:01.581) 0:07:14.642 ******
2026-02-17 05:54:43.670647 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670658 | orchestrator |
2026-02-17 05:54:43.670669 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 05:54:43.670680 | orchestrator | Tuesday 17 February 2026 05:54:00 +0000 (0:00:01.248) 0:07:15.890 ******
2026-02-17 05:54:43.670691 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.670702 | orchestrator |
2026-02-17 05:54:43.670713 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 05:54:43.670724 | orchestrator | Tuesday 17 February 2026 05:54:01 +0000 (0:00:01.191) 0:07:17.081 ******
2026-02-17 05:54:43.670734 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670745 | orchestrator |
2026-02-17 05:54:43.670756 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 05:54:43.670767 | orchestrator | Tuesday 17 February 2026 05:54:02 +0000 (0:00:01.160) 0:07:18.242 ******
2026-02-17 05:54:43.670778 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670789 | orchestrator |
2026-02-17 05:54:43.670800 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 05:54:43.670811 | orchestrator | Tuesday 17 February 2026 05:54:04 +0000 (0:00:01.136) 0:07:19.379 ******
2026-02-17 05:54:43.670822 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670832 | orchestrator |
2026-02-17 05:54:43.670844 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 05:54:43.670854 | orchestrator | Tuesday 17 February 2026 05:54:05 +0000 (0:00:01.138) 0:07:20.517 ******
2026-02-17 05:54:43.670865 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670876 | orchestrator |
2026-02-17 05:54:43.670887 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 05:54:43.670898 | orchestrator | Tuesday 17 February 2026 05:54:06 +0000 (0:00:01.167) 0:07:21.684 ******
2026-02-17 05:54:43.670909 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.670920 | orchestrator |
2026-02-17 05:54:43.670931 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 05:54:43.670942 | orchestrator | Tuesday 17 February 2026 05:54:07 +0000 (0:00:01.123) 0:07:22.808 ******
2026-02-17 05:54:43.670953 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.670964 | orchestrator |
2026-02-17 05:54:43.670975 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 05:54:43.670986 | orchestrator | Tuesday 17 February 2026 05:54:08 +0000 (0:00:01.175) 0:07:23.984 ******
2026-02-17 05:54:43.670997 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.671008 | orchestrator |
2026-02-17 05:54:43.671019 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 05:54:43.671044 | orchestrator | Tuesday 17 February 2026 05:54:09 +0000 (0:00:01.165) 0:07:25.149 ******
2026-02-17 05:54:43.671055 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.671066 | orchestrator |
2026-02-17 05:54:43.671077 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 05:54:43.671088 | orchestrator | Tuesday 17 February 2026 05:54:11 +0000 (0:00:01.209) 0:07:26.359 ******
2026-02-17 05:54:43.671099 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671110 | orchestrator |
2026-02-17 05:54:43.671121 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 05:54:43.671132 | orchestrator | Tuesday 17 February 2026 05:54:12 +0000 (0:00:01.146) 0:07:27.506 ******
2026-02-17 05:54:43.671143 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671154 | orchestrator |
2026-02-17 05:54:43.671165 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 05:54:43.671176 | orchestrator | Tuesday 17 February 2026 05:54:13 +0000 (0:00:01.177) 0:07:28.684 ******
2026-02-17 05:54:43.671233 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671245 | orchestrator |
2026-02-17 05:54:43.671256 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 05:54:43.671266 | orchestrator | Tuesday 17 February 2026 05:54:14 +0000 (0:00:01.177) 0:07:29.861 ******
2026-02-17 05:54:43.671277 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671288 | orchestrator |
2026-02-17 05:54:43.671299 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 05:54:43.671310 | orchestrator | Tuesday 17 February 2026 05:54:15 +0000 (0:00:01.238) 0:07:31.100 ******
2026-02-17 05:54:43.671337 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671372 | orchestrator |
2026-02-17 05:54:43.671383 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 05:54:43.671394 | orchestrator | Tuesday 17 February 2026 05:54:16 +0000 (0:00:01.117) 0:07:32.217 ******
2026-02-17 05:54:43.671405 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671416 | orchestrator |
2026-02-17 05:54:43.671427 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 05:54:43.671437 | orchestrator | Tuesday 17 February 2026 05:54:18 +0000 (0:00:01.189) 0:07:33.407 ******
2026-02-17 05:54:43.671448 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671459 | orchestrator |
2026-02-17 05:54:43.671470 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 05:54:43.671482 | orchestrator | Tuesday 17 February 2026 05:54:19 +0000 (0:00:01.132) 0:07:34.539 ******
2026-02-17 05:54:43.671493 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671516 | orchestrator |
2026-02-17 05:54:43.671527 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 05:54:43.671538 | orchestrator | Tuesday 17 February 2026 05:54:20 +0000 (0:00:01.122) 0:07:35.661 ******
2026-02-17 05:54:43.671549 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671559 | orchestrator |
2026-02-17 05:54:43.671571 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 05:54:43.671582 | orchestrator | Tuesday 17 February 2026 05:54:21 +0000 (0:00:01.172) 0:07:36.833 ******
2026-02-17 05:54:43.671592 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671603 | orchestrator |
2026-02-17 05:54:43.671614 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 05:54:43.671625 | orchestrator | Tuesday 17 February 2026 05:54:22 +0000 (0:00:01.123) 0:07:37.957 ******
2026-02-17 05:54:43.671636 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671646 | orchestrator |
2026-02-17 05:54:43.671657 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 05:54:43.671668 | orchestrator | Tuesday 17 February 2026 05:54:23 +0000 (0:00:01.124) 0:07:39.082 ******
2026-02-17 05:54:43.671679 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671689 | orchestrator |
2026-02-17 05:54:43.671701 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 05:54:43.671711 | orchestrator | Tuesday 17 February 2026 05:54:24 +0000 (0:00:01.116) 0:07:40.198 ******
2026-02-17 05:54:43.671722 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.671733 | orchestrator |
2026-02-17 05:54:43.671744 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 05:54:43.671755 | orchestrator | Tuesday 17 February 2026 05:54:26 +0000 (0:00:01.954) 0:07:42.153 ******
2026-02-17 05:54:43.671765 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.671776 | orchestrator |
2026-02-17 05:54:43.671787 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 05:54:43.671797 | orchestrator | Tuesday 17 February 2026 05:54:29 +0000 (0:00:02.482) 0:07:44.635 ******
2026-02-17 05:54:43.671808 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-17 05:54:43.671820 | orchestrator |
2026-02-17 05:54:43.671831 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 05:54:43.671850 | orchestrator | Tuesday 17 February 2026 05:54:30 +0000 (0:00:01.519) 0:07:46.155 ******
2026-02-17 05:54:43.671860 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671871 | orchestrator |
2026-02-17 05:54:43.671882 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 05:54:43.671893 | orchestrator | Tuesday 17 February 2026 05:54:32 +0000 (0:00:01.151) 0:07:47.306 ******
2026-02-17 05:54:43.671904 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.671914 | orchestrator |
2026-02-17 05:54:43.671926 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 05:54:43.671937 | orchestrator | Tuesday 17 February 2026 05:54:33 +0000 (0:00:01.167) 0:07:48.474 ******
2026-02-17 05:54:43.671947 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 05:54:43.671958 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 05:54:43.671969 | orchestrator |
2026-02-17 05:54:43.671980 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 05:54:43.671991 | orchestrator | Tuesday 17 February 2026 05:54:35 +0000 (0:00:01.903) 0:07:50.377 ******
2026-02-17 05:54:43.672008 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.672019 | orchestrator |
2026-02-17 05:54:43.672030 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 05:54:43.672041 | orchestrator | Tuesday 17 February 2026 05:54:36 +0000 (0:00:01.766) 0:07:52.143 ******
2026-02-17 05:54:43.672052 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.672063 | orchestrator |
2026-02-17 05:54:43.672073 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 05:54:43.672084 | orchestrator | Tuesday 17 February 2026 05:54:38 +0000 (0:00:01.194) 0:07:53.338 ******
2026-02-17 05:54:43.672095 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.672106 | orchestrator |
2026-02-17 05:54:43.672117 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 05:54:43.672128 | orchestrator | Tuesday 17 February 2026 05:54:39 +0000 (0:00:01.138) 0:07:54.476 ******
2026-02-17 05:54:43.672138 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:54:43.672149 | orchestrator |
2026-02-17 05:54:43.672160 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 05:54:43.672171 | orchestrator | Tuesday 17 February 2026 05:54:40 +0000 (0:00:01.147) 0:07:55.624 ******
2026-02-17 05:54:43.672182 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-17 05:54:43.672192 | orchestrator |
2026-02-17 05:54:43.672225 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 05:54:43.672236 | orchestrator | Tuesday 17 February 2026 05:54:41 +0000 (0:00:01.491) 0:07:57.116 ******
2026-02-17 05:54:43.672247 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:54:43.672258 | orchestrator |
2026-02-17 05:54:43.672275 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 05:55:31.302956 | orchestrator | Tuesday 17 February 2026 05:54:43 +0000 (0:00:01.812) 0:07:58.928 ******
2026-02-17 05:55:31.303070 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 05:55:31.303088 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 05:55:31.303104 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 05:55:31.303118 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303133 | orchestrator |
2026-02-17 05:55:31.303148 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 05:55:31.303159 | orchestrator | Tuesday 17 February 2026 05:54:44 +0000 (0:00:01.165) 0:08:00.094 ******
2026-02-17 05:55:31.303171 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303183 | orchestrator |
2026-02-17 05:55:31.303194 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 05:55:31.303229 | orchestrator | Tuesday 17 February 2026 05:54:45 +0000 (0:00:01.102) 0:08:01.196 ******
2026-02-17 05:55:31.303250 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303269 | orchestrator |
2026-02-17 05:55:31.303286 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 05:55:31.303304 | orchestrator | Tuesday 17 February 2026 05:54:47 +0000 (0:00:01.215) 0:08:02.412 ******
2026-02-17 05:55:31.303323 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303374 | orchestrator |
2026-02-17 05:55:31.303389 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 05:55:31.303401 | orchestrator | Tuesday 17 February 2026 05:54:48 +0000 (0:00:01.160) 0:08:03.572 ******
2026-02-17 05:55:31.303412 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303423 | orchestrator |
2026-02-17 05:55:31.303434 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-17 05:55:31.303445 | orchestrator | Tuesday 17 February 2026 05:54:49 +0000 (0:00:01.168) 0:08:04.741 ******
2026-02-17 05:55:31.303455 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303466 | orchestrator |
2026-02-17 05:55:31.303478 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 05:55:31.303488 | orchestrator | Tuesday 17 February 2026 05:54:50 +0000 (0:00:01.151) 0:08:05.892 ******
2026-02-17 05:55:31.303499 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:55:31.303511 | orchestrator |
2026-02-17 05:55:31.303522 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 05:55:31.303533 | orchestrator | Tuesday 17 February 2026 05:54:53 +0000 (0:00:02.598) 0:08:08.491 ******
2026-02-17 05:55:31.303543 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:55:31.303554 | orchestrator |
2026-02-17 05:55:31.303565 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 05:55:31.303576 | orchestrator | Tuesday 17 February 2026 05:54:54 +0000 (0:00:01.137) 0:08:09.628 ******
2026-02-17 05:55:31.303587 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-17 05:55:31.303598 | orchestrator |
2026-02-17 05:55:31.303608 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 05:55:31.303619 | orchestrator | Tuesday 17 February 2026 05:54:55 +0000 (0:00:01.558) 0:08:11.187 ******
2026-02-17 05:55:31.303630 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303641 | orchestrator |
2026-02-17 05:55:31.303651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 05:55:31.303662 | orchestrator | Tuesday 17 February 2026 05:54:57 +0000 (0:00:01.154) 0:08:12.342 ******
2026-02-17 05:55:31.303673 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303684 | orchestrator |
2026-02-17 05:55:31.303694 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 05:55:31.303705 | orchestrator | Tuesday 17 February 2026 05:54:58 +0000 (0:00:01.171) 0:08:13.514 ******
2026-02-17 05:55:31.303716 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303727 | orchestrator |
2026-02-17 05:55:31.303738 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 05:55:31.303749 | orchestrator | Tuesday 17 February 2026 05:54:59 +0000 (0:00:01.151) 0:08:14.665 ******
2026-02-17 05:55:31.303760 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303770 | orchestrator |
2026-02-17 05:55:31.303781 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 05:55:31.303801 | orchestrator | Tuesday 17 February 2026 05:55:00 +0000 (0:00:01.194) 0:08:15.860 ******
2026-02-17 05:55:31.303812 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303824 | orchestrator |
2026-02-17 05:55:31.303835 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 05:55:31.303846 | orchestrator | Tuesday 17 February 2026 05:55:01 +0000 (0:00:01.156) 0:08:17.016 ******
2026-02-17 05:55:31.303856 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303867 | orchestrator |
2026-02-17 05:55:31.303878 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 05:55:31.303899 | orchestrator | Tuesday 17 February 2026 05:55:02 +0000 (0:00:01.184) 0:08:18.201 ******
2026-02-17 05:55:31.303910 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303921 | orchestrator |
2026-02-17 05:55:31.303932 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 05:55:31.303943 | orchestrator | Tuesday 17 February 2026 05:55:04 +0000 (0:00:01.151) 0:08:19.352 ******
2026-02-17 05:55:31.303953 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.303964 | orchestrator |
2026-02-17 05:55:31.303975 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 05:55:31.303986 | orchestrator | Tuesday 17 February 2026 05:55:05 +0000 (0:00:01.143) 0:08:20.496 ******
2026-02-17 05:55:31.303997 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:55:31.304008 | orchestrator |
2026-02-17 05:55:31.304019 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 05:55:31.304030 | orchestrator | Tuesday 17 February 2026 05:55:06 +0000 (0:00:01.137) 0:08:21.634 ******
2026-02-17 05:55:31.304041 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-17 05:55:31.304053 | orchestrator |
2026-02-17 05:55:31.304081 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 05:55:31.304093 | orchestrator | Tuesday 17 February 2026 05:55:07 +0000 (0:00:01.546) 0:08:23.181 ******
2026-02-17 05:55:31.304103 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-17 05:55:31.304115 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-17 05:55:31.304125 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-17 05:55:31.304136 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-17 05:55:31.304147 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-17 05:55:31.304158 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-17 05:55:31.304169 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-17 05:55:31.304179 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-17 05:55:31.304191 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 05:55:31.304202 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 05:55:31.304213 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 05:55:31.304224 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 05:55:31.304234 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 05:55:31.304245 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 05:55:31.304256 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-17 05:55:31.304267 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-17 05:55:31.304278 | orchestrator |
2026-02-17 05:55:31.304289 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 05:55:31.304300 | orchestrator | Tuesday 17 February 2026 05:55:14 +0000 (0:00:07.089) 0:08:30.271 ******
2026-02-17 05:55:31.304310 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304321 | orchestrator |
2026-02-17 05:55:31.304332 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 05:55:31.304372 | orchestrator | Tuesday 17 February 2026 05:55:16 +0000 (0:00:01.162) 0:08:31.433 ******
2026-02-17 05:55:31.304384 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304395 | orchestrator |
2026-02-17 05:55:31.304406 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 05:55:31.304417 | orchestrator | Tuesday 17 February 2026 05:55:17 +0000 (0:00:01.124) 0:08:32.558 ******
2026-02-17 05:55:31.304428 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304439 | orchestrator |
2026-02-17 05:55:31.304450 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 05:55:31.304471 | orchestrator | Tuesday 17 February 2026 05:55:18 +0000 (0:00:01.146) 0:08:33.704 ******
2026-02-17 05:55:31.304482 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304493 | orchestrator |
2026-02-17 05:55:31.304504 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 05:55:31.304515 | orchestrator | Tuesday 17 February 2026 05:55:19 +0000 (0:00:01.133) 0:08:34.837 ******
2026-02-17 05:55:31.304525 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304536 | orchestrator |
2026-02-17 05:55:31.304547 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 05:55:31.304558 | orchestrator | Tuesday 17 February 2026 05:55:20 +0000 (0:00:01.182) 0:08:36.020 ******
2026-02-17 05:55:31.304569 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304580 | orchestrator |
2026-02-17 05:55:31.304591 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 05:55:31.304602 | orchestrator | Tuesday 17 February 2026 05:55:21 +0000 (0:00:01.128) 0:08:37.149 ******
2026-02-17 05:55:31.304613 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304624 | orchestrator |
2026-02-17 05:55:31.304635 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 05:55:31.304646 | orchestrator | Tuesday 17 February 2026 05:55:23 +0000 (0:00:01.135) 0:08:38.285 ******
2026-02-17 05:55:31.304657 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304668 | orchestrator |
2026-02-17 05:55:31.304683 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 05:55:31.304695 | orchestrator | Tuesday 17 February 2026 05:55:24 +0000 (0:00:01.140) 0:08:39.425 ******
2026-02-17 05:55:31.304706 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304717 | orchestrator |
2026-02-17 05:55:31.304727 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 05:55:31.304738 | orchestrator | Tuesday 17 February 2026 05:55:25 +0000 (0:00:01.286) 0:08:40.712 ******
2026-02-17 05:55:31.304749 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304760 | orchestrator |
2026-02-17 05:55:31.304771 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 05:55:31.304782 | orchestrator | Tuesday 17 February 2026 05:55:26 +0000 (0:00:01.101) 0:08:41.813 ******
2026-02-17 05:55:31.304793 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304804 | orchestrator |
2026-02-17 05:55:31.304830 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 05:55:31.304852 | orchestrator | Tuesday 17 February 2026 05:55:27 +0000 (0:00:01.158) 0:08:42.972 ******
2026-02-17 05:55:31.304864 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304875 | orchestrator |
2026-02-17 05:55:31.304886 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 05:55:31.304897 | orchestrator | Tuesday 17 February 2026 05:55:28 +0000 (0:00:01.174) 0:08:44.146 ******
2026-02-17 05:55:31.304908 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304919 | orchestrator |
2026-02-17 05:55:31.304930 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 05:55:31.304941 | orchestrator | Tuesday 17 February 2026 05:55:30 +0000 (0:00:01.265) 0:08:45.412 ******
2026-02-17 05:55:31.304952 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:55:31.304963 | orchestrator |
2026-02-17 05:55:31.304980 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 05:56:26.600838 | orchestrator | Tuesday 17 February 2026 05:55:31 +0000 (0:00:01.141) 0:08:46.553 ******
2026-02-17 05:56:26.600953 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.600969 | orchestrator |
2026-02-17 05:56:26.600982 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 05:56:26.600994 | orchestrator | Tuesday 17 February 2026 05:55:32 +0000 (0:00:01.223) 0:08:47.777 ******
2026-02-17 05:56:26.601005 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601016 | orchestrator |
2026-02-17 05:56:26.601052 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 05:56:26.601064 | orchestrator | Tuesday 17 February 2026 05:55:33 +0000 (0:00:01.131) 0:08:48.908 ******
2026-02-17 05:56:26.601075 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601085 | orchestrator |
2026-02-17 05:56:26.601097 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 05:56:26.601109 | orchestrator | Tuesday 17 February 2026 05:55:34 +0000 (0:00:01.143) 0:08:50.052 ******
2026-02-17 05:56:26.601120 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601131 | orchestrator |
2026-02-17 05:56:26.601142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 05:56:26.601153 | orchestrator | Tuesday 17 February 2026 05:55:35 +0000 (0:00:01.151) 0:08:51.203 ******
2026-02-17 05:56:26.601164 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601175 | orchestrator |
2026-02-17 05:56:26.601186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 05:56:26.601197 | orchestrator | Tuesday 17 February 2026 05:55:37 +0000 (0:00:01.115) 0:08:52.319 ******
2026-02-17 05:56:26.601207 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601218 | orchestrator |
2026-02-17 05:56:26.601229 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 05:56:26.601240 | orchestrator | Tuesday 17 February 2026 05:55:38 +0000 (0:00:01.176) 0:08:53.495 ******
2026-02-17 05:56:26.601251 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601261 | orchestrator |
2026-02-17 05:56:26.601272 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 05:56:26.601283 | orchestrator | Tuesday 17 February 2026 05:55:39 +0000 (0:00:01.131) 0:08:54.626 ******
2026-02-17 05:56:26.601294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-17 05:56:26.601306 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 05:56:26.601317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 05:56:26.601328 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601338 | orchestrator |
2026-02-17 05:56:26.601349 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 05:56:26.601360 | orchestrator | Tuesday 17 February 2026 05:55:41 +0000 (0:00:01.805) 0:08:56.432 ******
2026-02-17 05:56:26.601371 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-17 05:56:26.601448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 05:56:26.601461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 05:56:26.601475 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601488 | orchestrator |
2026-02-17 05:56:26.601501 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 05:56:26.601514 | orchestrator | Tuesday 17 February 2026 05:55:42 +0000 (0:00:01.420) 0:08:57.853 ******
2026-02-17 05:56:26.601527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-17 05:56:26.601539 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 05:56:26.601550 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 05:56:26.601563 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601575 | orchestrator |
2026-02-17 05:56:26.601587 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 05:56:26.601599 | orchestrator | Tuesday 17 February 2026 05:55:44 +0000 (0:00:01.469) 0:08:59.323 ******
2026-02-17 05:56:26.601612 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601625 | orchestrator |
2026-02-17 05:56:26.601637 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 05:56:26.601664 | orchestrator | Tuesday 17 February 2026 05:55:45 +0000 (0:00:01.125) 0:09:00.448 ******
2026-02-17 05:56:26.601677 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-17 05:56:26.601687 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601707 | orchestrator |
2026-02-17 05:56:26.601718 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 05:56:26.601729 | orchestrator | Tuesday 17 February 2026 05:55:46 +0000 (0:00:01.368) 0:09:01.817 ******
2026-02-17 05:56:26.601740 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.601751 | orchestrator |
2026-02-17 05:56:26.601762 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-17 05:56:26.601773 | orchestrator | Tuesday 17 February 2026 05:55:48 +0000 (0:00:01.776) 0:09:03.594 ******
2026-02-17 05:56:26.601783 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.601794 | orchestrator |
2026-02-17 05:56:26.601805 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-17 05:56:26.601816 | orchestrator | Tuesday 17 February 2026 05:55:49 +0000 (0:00:01.145) 0:09:04.739 ******
2026-02-17 05:56:26.601827 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-02-17 05:56:26.601838 | orchestrator |
2026-02-17 05:56:26.601849 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-17 05:56:26.601860 | orchestrator | Tuesday 17 February 2026 05:55:50 +0000 (0:00:01.515) 0:09:06.255 ******
2026-02-17 05:56:26.601871 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-17 05:56:26.601882 | orchestrator |
2026-02-17 05:56:26.601893 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-17 05:56:26.601904 | orchestrator | Tuesday 17 February 2026 05:55:54 +0000 (0:00:03.458) 0:09:09.713 ******
2026-02-17 05:56:26.601915 | orchestrator | skipping: [testbed-node-0]
2026-02-17 05:56:26.601926 | orchestrator |
2026-02-17 05:56:26.601954 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-17 05:56:26.601966 | orchestrator | Tuesday 17 February 2026 05:55:55 +0000 (0:00:01.172) 0:09:10.885 ******
2026-02-17 05:56:26.601977 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.601987 | orchestrator |
2026-02-17 05:56:26.601998 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-17 05:56:26.602009 | orchestrator | Tuesday 17 February 2026 05:55:56 +0000 (0:00:01.162) 0:09:12.048 ******
2026-02-17 05:56:26.602078 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.602090 | orchestrator |
2026-02-17 05:56:26.602101 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-17 05:56:26.602112 | orchestrator | Tuesday 17 February 2026 05:55:57 +0000 (0:00:01.151) 0:09:13.200 ******
2026-02-17 05:56:26.602123 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:56:26.602134 | orchestrator |
2026-02-17 05:56:26.602144 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-17 05:56:26.602164 | orchestrator | Tuesday 17 February 2026 05:56:00 +0000 (0:00:02.104) 0:09:15.304 ******
2026-02-17 05:56:26.602176 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.602186 | orchestrator |
2026-02-17 05:56:26.602197 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-17 05:56:26.602208 | orchestrator | Tuesday 17 February 2026 05:56:01 +0000 (0:00:01.613) 0:09:16.918 ******
2026-02-17 05:56:26.602219 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.602230 | orchestrator |
2026-02-17 05:56:26.602241 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-17 05:56:26.602252 | orchestrator | Tuesday 17 February 2026 05:56:03 +0000 (0:00:01.524) 0:09:18.442 ******
2026-02-17 05:56:26.602262 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.602273 | orchestrator |
2026-02-17 05:56:26.602284 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-17 05:56:26.602295 | orchestrator | Tuesday 17 February 2026 05:56:04 +0000 (0:00:01.614) 0:09:20.056 ******
2026-02-17 05:56:26.602305 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.602316 | orchestrator |
2026-02-17 05:56:26.602327 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-17 05:56:26.602338 | orchestrator | Tuesday 17 February 2026 05:56:06 +0000 (0:00:01.723) 0:09:21.780 ******
2026-02-17 05:56:26.602357 | orchestrator | ok: [testbed-node-0]
2026-02-17 05:56:26.602368 | orchestrator |
2026-02-17 05:56:26.602400 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-17 05:56:26.602411 | orchestrator | Tuesday 17 February 2026 05:56:08 +0000 (0:00:01.783) 0:09:23.564 ******
2026-02-17 05:56:26.602422 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-17 05:56:26.602433 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-17 05:56:26.602444 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-17 05:56:26.602454 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-02-17 05:56:26.602465 | orchestrator |
2026-02-17 05:56:26.602476 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-17 05:56:26.602487 | orchestrator | Tuesday 17 February 2026 05:56:12 +0000 (0:00:03.867) 0:09:27.432 ******
2026-02-17 05:56:26.602498 | orchestrator | changed: [testbed-node-0]
2026-02-17 05:56:26.602509 | orchestrator |
2026-02-17 05:56:26.602520 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-17
05:56:26.602531 | orchestrator | Tuesday 17 February 2026 05:56:14 +0000 (0:00:02.072) 0:09:29.504 ****** 2026-02-17 05:56:26.602541 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:56:26.602552 | orchestrator | 2026-02-17 05:56:26.602563 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-17 05:56:26.602574 | orchestrator | Tuesday 17 February 2026 05:56:15 +0000 (0:00:01.163) 0:09:30.667 ****** 2026-02-17 05:56:26.602585 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:56:26.602595 | orchestrator | 2026-02-17 05:56:26.602606 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-17 05:56:26.602617 | orchestrator | Tuesday 17 February 2026 05:56:16 +0000 (0:00:01.128) 0:09:31.796 ****** 2026-02-17 05:56:26.602628 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:56:26.602639 | orchestrator | 2026-02-17 05:56:26.602650 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-17 05:56:26.602666 | orchestrator | Tuesday 17 February 2026 05:56:18 +0000 (0:00:02.174) 0:09:33.970 ****** 2026-02-17 05:56:26.602677 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:56:26.602688 | orchestrator | 2026-02-17 05:56:26.602699 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-17 05:56:26.602713 | orchestrator | Tuesday 17 February 2026 05:56:20 +0000 (0:00:01.470) 0:09:35.441 ****** 2026-02-17 05:56:26.602730 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:56:26.602748 | orchestrator | 2026-02-17 05:56:26.602764 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-17 05:56:26.602782 | orchestrator | Tuesday 17 February 2026 05:56:21 +0000 (0:00:01.149) 0:09:36.590 ****** 2026-02-17 05:56:26.602801 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-17 
05:56:26.602818 | orchestrator | 2026-02-17 05:56:26.602833 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-17 05:56:26.602844 | orchestrator | Tuesday 17 February 2026 05:56:22 +0000 (0:00:01.469) 0:09:38.060 ****** 2026-02-17 05:56:26.602855 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:56:26.602865 | orchestrator | 2026-02-17 05:56:26.602876 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-17 05:56:26.602887 | orchestrator | Tuesday 17 February 2026 05:56:23 +0000 (0:00:01.159) 0:09:39.220 ****** 2026-02-17 05:56:26.602898 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:56:26.602909 | orchestrator | 2026-02-17 05:56:26.602919 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-17 05:56:26.602930 | orchestrator | Tuesday 17 February 2026 05:56:25 +0000 (0:00:01.126) 0:09:40.346 ****** 2026-02-17 05:56:26.602941 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-17 05:56:26.602952 | orchestrator | 2026-02-17 05:56:26.602972 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-17 05:57:18.875009 | orchestrator | Tuesday 17 February 2026 05:56:26 +0000 (0:00:01.513) 0:09:41.860 ****** 2026-02-17 05:57:18.875165 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:57:18.875181 | orchestrator | 2026-02-17 05:57:18.875202 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-17 05:57:18.875248 | orchestrator | Tuesday 17 February 2026 05:56:28 +0000 (0:00:02.326) 0:09:44.187 ****** 2026-02-17 05:57:18.875258 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:57:18.875267 | orchestrator | 2026-02-17 05:57:18.875275 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-17 
05:57:18.875287 | orchestrator | Tuesday 17 February 2026 05:56:30 +0000 (0:00:02.029) 0:09:46.217 ****** 2026-02-17 05:57:18.875301 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:57:18.875313 | orchestrator | 2026-02-17 05:57:18.875326 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-17 05:57:18.875339 | orchestrator | Tuesday 17 February 2026 05:56:33 +0000 (0:00:02.506) 0:09:48.723 ****** 2026-02-17 05:57:18.875353 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:57:18.875367 | orchestrator | 2026-02-17 05:57:18.875381 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-17 05:57:18.875392 | orchestrator | Tuesday 17 February 2026 05:56:36 +0000 (0:00:03.248) 0:09:51.971 ****** 2026-02-17 05:57:18.875400 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-17 05:57:18.875409 | orchestrator | 2026-02-17 05:57:18.875417 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-17 05:57:18.875426 | orchestrator | Tuesday 17 February 2026 05:56:38 +0000 (0:00:01.634) 0:09:53.605 ****** 2026-02-17 05:57:18.875434 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:57:18.875442 | orchestrator | 2026-02-17 05:57:18.875449 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-17 05:57:18.875458 | orchestrator | Tuesday 17 February 2026 05:56:40 +0000 (0:00:02.253) 0:09:55.859 ****** 2026-02-17 05:57:18.875466 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:57:18.875474 | orchestrator | 2026-02-17 05:57:18.875501 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-17 05:57:18.875510 | orchestrator | Tuesday 17 February 2026 05:56:43 +0000 (0:00:02.994) 0:09:58.853 ****** 2026-02-17 05:57:18.875518 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:57:18.875526 | orchestrator | 2026-02-17 05:57:18.875534 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-17 05:57:18.875542 | orchestrator | Tuesday 17 February 2026 05:56:44 +0000 (0:00:01.143) 0:09:59.997 ****** 2026-02-17 05:57:18.875552 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-17 05:57:18.875564 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-17 05:57:18.875586 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-17 05:57:18.875596 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-17 05:57:18.875615 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-17 05:57:18.875626 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}])  2026-02-17 05:57:18.875637 | orchestrator | 2026-02-17 05:57:18.875661 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-17 05:57:18.875676 | orchestrator | Tuesday 17 February 2026 05:56:54 +0000 (0:00:09.623) 0:10:09.620 ****** 
2026-02-17 05:57:18.875690 | orchestrator | changed: [testbed-node-0] 2026-02-17 05:57:18.875703 | orchestrator | 2026-02-17 05:57:18.875718 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 05:57:18.875732 | orchestrator | Tuesday 17 February 2026 05:56:56 +0000 (0:00:02.496) 0:10:12.117 ****** 2026-02-17 05:57:18.875746 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 05:57:18.875759 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-17 05:57:18.875773 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-17 05:57:18.875785 | orchestrator | 2026-02-17 05:57:18.875798 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 05:57:18.875811 | orchestrator | Tuesday 17 February 2026 05:56:59 +0000 (0:00:02.215) 0:10:14.332 ****** 2026-02-17 05:57:18.875826 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 05:57:18.875839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 05:57:18.875853 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 05:57:18.875865 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:57:18.875879 | orchestrator | 2026-02-17 05:57:18.875891 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-17 05:57:18.875903 | orchestrator | Tuesday 17 February 2026 05:57:00 +0000 (0:00:01.493) 0:10:15.826 ****** 2026-02-17 05:57:18.875915 | orchestrator | skipping: [testbed-node-0] 2026-02-17 05:57:18.875928 | orchestrator | 2026-02-17 05:57:18.875941 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-17 05:57:18.875955 | orchestrator | Tuesday 17 February 2026 05:57:01 +0000 (0:00:01.165) 0:10:16.992 ****** 2026-02-17 05:57:18.875967 | orchestrator | ok: [testbed-node-0] 2026-02-17 05:57:18.875980 | orchestrator | 2026-02-17 05:57:18.875992 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-17 05:57:18.876006 | orchestrator | 2026-02-17 05:57:18.876019 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-17 05:57:18.876032 | orchestrator | Tuesday 17 February 2026 05:57:04 +0000 (0:00:02.401) 0:10:19.393 ****** 2026-02-17 05:57:18.876046 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876060 | orchestrator | 2026-02-17 05:57:18.876073 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-17 05:57:18.876087 | orchestrator | Tuesday 17 February 2026 05:57:05 +0000 (0:00:01.167) 0:10:20.561 ****** 2026-02-17 05:57:18.876100 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876113 | orchestrator | 2026-02-17 05:57:18.876126 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-17 05:57:18.876139 | orchestrator | Tuesday 17 February 2026 05:57:06 +0000 (0:00:00.782) 0:10:21.343 ****** 2026-02-17 05:57:18.876162 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:18.876175 | orchestrator | 2026-02-17 05:57:18.876188 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-17 05:57:18.876201 | orchestrator | Tuesday 17 February 2026 05:57:06 +0000 (0:00:00.776) 0:10:22.120 ****** 2026-02-17 05:57:18.876215 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876228 | orchestrator | 2026-02-17 05:57:18.876241 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 05:57:18.876255 | orchestrator | Tuesday 17 February 
2026 05:57:07 +0000 (0:00:00.814) 0:10:22.935 ****** 2026-02-17 05:57:18.876268 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-17 05:57:18.876280 | orchestrator | 2026-02-17 05:57:18.876293 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 05:57:18.876306 | orchestrator | Tuesday 17 February 2026 05:57:08 +0000 (0:00:01.181) 0:10:24.117 ****** 2026-02-17 05:57:18.876318 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876330 | orchestrator | 2026-02-17 05:57:18.876343 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 05:57:18.876356 | orchestrator | Tuesday 17 February 2026 05:57:10 +0000 (0:00:01.503) 0:10:25.621 ****** 2026-02-17 05:57:18.876369 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876382 | orchestrator | 2026-02-17 05:57:18.876403 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 05:57:18.876417 | orchestrator | Tuesday 17 February 2026 05:57:11 +0000 (0:00:01.183) 0:10:26.804 ****** 2026-02-17 05:57:18.876430 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876444 | orchestrator | 2026-02-17 05:57:18.876457 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 05:57:18.876469 | orchestrator | Tuesday 17 February 2026 05:57:13 +0000 (0:00:01.487) 0:10:28.291 ****** 2026-02-17 05:57:18.876505 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876520 | orchestrator | 2026-02-17 05:57:18.876533 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 05:57:18.876546 | orchestrator | Tuesday 17 February 2026 05:57:14 +0000 (0:00:01.124) 0:10:29.416 ****** 2026-02-17 05:57:18.876559 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876572 | orchestrator | 2026-02-17 05:57:18.876585 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 05:57:18.876598 | orchestrator | Tuesday 17 February 2026 05:57:15 +0000 (0:00:01.158) 0:10:30.574 ****** 2026-02-17 05:57:18.876611 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876624 | orchestrator | 2026-02-17 05:57:18.876637 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 05:57:18.876649 | orchestrator | Tuesday 17 February 2026 05:57:16 +0000 (0:00:01.190) 0:10:31.764 ****** 2026-02-17 05:57:18.876662 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:18.876675 | orchestrator | 2026-02-17 05:57:18.876688 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 05:57:18.876701 | orchestrator | Tuesday 17 February 2026 05:57:17 +0000 (0:00:01.171) 0:10:32.936 ****** 2026-02-17 05:57:18.876714 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:18.876729 | orchestrator | 2026-02-17 05:57:18.876742 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 05:57:18.876768 | orchestrator | Tuesday 17 February 2026 05:57:18 +0000 (0:00:01.197) 0:10:34.133 ****** 2026-02-17 05:57:44.703439 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 05:57:44.703522 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 05:57:44.703570 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:57:44.703575 | orchestrator | 2026-02-17 05:57:44.703580 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 05:57:44.703586 | orchestrator | Tuesday 17 February 2026 05:57:20 +0000 (0:00:01.874) 0:10:36.008 ****** 2026-02-17 05:57:44.703590 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:44.703619 | 
orchestrator | 2026-02-17 05:57:44.703628 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 05:57:44.703635 | orchestrator | Tuesday 17 February 2026 05:57:22 +0000 (0:00:01.276) 0:10:37.285 ****** 2026-02-17 05:57:44.703642 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 05:57:44.703650 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 05:57:44.703658 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:57:44.703665 | orchestrator | 2026-02-17 05:57:44.703673 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 05:57:44.703681 | orchestrator | Tuesday 17 February 2026 05:57:24 +0000 (0:00:02.867) 0:10:40.152 ****** 2026-02-17 05:57:44.703688 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-17 05:57:44.703696 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-17 05:57:44.703704 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-17 05:57:44.703711 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.703719 | orchestrator | 2026-02-17 05:57:44.703726 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 05:57:44.703734 | orchestrator | Tuesday 17 February 2026 05:57:26 +0000 (0:00:01.447) 0:10:41.599 ****** 2026-02-17 05:57:44.703743 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 05:57:44.703754 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 05:57:44.703762 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 05:57:44.703770 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.703777 | orchestrator | 2026-02-17 05:57:44.703784 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 05:57:44.703792 | orchestrator | Tuesday 17 February 2026 05:57:27 +0000 (0:00:01.642) 0:10:43.241 ****** 2026-02-17 05:57:44.703801 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:44.703823 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:44.703831 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:44.703839 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.703846 | orchestrator | 2026-02-17 05:57:44.703854 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 05:57:44.703868 | orchestrator | Tuesday 17 February 2026 05:57:29 +0000 (0:00:01.189) 0:10:44.430 ****** 2026-02-17 05:57:44.703891 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 05:57:22.536910', 'end': '2026-02-17 05:57:22.570857', 'delta': '0:00:00.033947', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 05:57:44.703902 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '5939893342f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 05:57:23.098698', 'end': '2026-02-17 05:57:23.152365', 'delta': '0:00:00.053667', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5939893342f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 05:57:44.703910 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '4f72f9ce519e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 05:57:23.663901', 'end': '2026-02-17 05:57:23.700933', 'delta': '0:00:00.037032', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f72f9ce519e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 05:57:44.703917 | orchestrator | 2026-02-17 05:57:44.703925 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 05:57:44.703932 | orchestrator | Tuesday 17 February 2026 05:57:30 +0000 (0:00:01.444) 0:10:45.874 ****** 2026-02-17 05:57:44.703940 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:44.703947 | orchestrator | 2026-02-17 05:57:44.703955 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 05:57:44.703962 | orchestrator | Tuesday 17 February 2026 05:57:31 +0000 (0:00:01.294) 0:10:47.169 ****** 2026-02-17 05:57:44.703969 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.703977 | orchestrator | 2026-02-17 05:57:44.703984 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 05:57:44.703991 | orchestrator | Tuesday 17 February 2026 05:57:33 +0000 (0:00:01.332) 0:10:48.502 ****** 2026-02-17 05:57:44.703999 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:44.704006 | orchestrator | 2026-02-17 05:57:44.704013 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-17 05:57:44.704022 | orchestrator | Tuesday 17 February 2026 05:57:34 +0000 (0:00:01.156) 0:10:49.658 ****** 2026-02-17 05:57:44.704030 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-02-17 05:57:44.704038 | orchestrator | 2026-02-17 05:57:44.704052 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 05:57:44.704060 | orchestrator | Tuesday 17 February 2026 05:57:37 +0000 (0:00:03.102) 0:10:52.761 ****** 2026-02-17 05:57:44.704069 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:44.704082 | orchestrator | 2026-02-17 05:57:44.704091 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 05:57:44.704099 | orchestrator | Tuesday 17 February 2026 05:57:38 +0000 (0:00:01.199) 0:10:53.961 ****** 2026-02-17 05:57:44.704108 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.704116 | orchestrator | 2026-02-17 05:57:44.704124 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 05:57:44.704133 | orchestrator | Tuesday 17 February 2026 05:57:39 +0000 (0:00:01.164) 0:10:55.125 ****** 2026-02-17 05:57:44.704141 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.704149 | orchestrator | 2026-02-17 05:57:44.704157 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 05:57:44.704166 | orchestrator | Tuesday 17 February 2026 05:57:41 +0000 (0:00:01.283) 0:10:56.408 ****** 2026-02-17 05:57:44.704174 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.704182 | orchestrator | 2026-02-17 05:57:44.704191 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 05:57:44.704199 | orchestrator | Tuesday 17 February 2026 05:57:42 +0000 (0:00:01.176) 0:10:57.584 ****** 
2026-02-17 05:57:44.704207 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.704216 | orchestrator | 2026-02-17 05:57:44.704224 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 05:57:44.704232 | orchestrator | Tuesday 17 February 2026 05:57:43 +0000 (0:00:01.207) 0:10:58.792 ****** 2026-02-17 05:57:44.704240 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:44.704249 | orchestrator | 2026-02-17 05:57:44.704257 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 05:57:44.704270 | orchestrator | Tuesday 17 February 2026 05:57:44 +0000 (0:00:01.163) 0:10:59.956 ****** 2026-02-17 05:57:51.736760 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:51.736834 | orchestrator | 2026-02-17 05:57:51.736841 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 05:57:51.736847 | orchestrator | Tuesday 17 February 2026 05:57:45 +0000 (0:00:01.124) 0:11:01.081 ****** 2026-02-17 05:57:51.736851 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:51.736855 | orchestrator | 2026-02-17 05:57:51.736860 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 05:57:51.736864 | orchestrator | Tuesday 17 February 2026 05:57:46 +0000 (0:00:01.143) 0:11:02.225 ****** 2026-02-17 05:57:51.736868 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:51.736872 | orchestrator | 2026-02-17 05:57:51.736876 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 05:57:51.736881 | orchestrator | Tuesday 17 February 2026 05:57:48 +0000 (0:00:01.148) 0:11:03.373 ****** 2026-02-17 05:57:51.736885 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:51.736888 | orchestrator | 2026-02-17 05:57:51.736892 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-17 05:57:51.736896 | orchestrator | Tuesday 17 February 2026 05:57:49 +0000 (0:00:01.101) 0:11:04.474 ****** 2026-02-17 05:57:51.736902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:57:51.736908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:57:51.736912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:57:51.736931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 05:57:51.736948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:57:51.736952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:57:51.736956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:57:51.736974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd83a89d3', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 05:57:51.736983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:57:51.736990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 05:57:51.736994 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:51.736998 | orchestrator | 2026-02-17 05:57:51.737002 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 05:57:51.737006 | orchestrator | Tuesday 17 February 2026 05:57:50 +0000 (0:00:01.214) 0:11:05.689 ****** 2026-02-17 05:57:51.737010 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:51.737019 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.535751 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.535863 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.535903 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.535930 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.535942 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.535978 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd83a89d3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.536001 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.536019 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 05:57:59.536032 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:59.536045 | orchestrator | 2026-02-17 05:57:59.536057 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 05:57:59.536070 | 
orchestrator | Tuesday 17 February 2026 05:57:51 +0000 (0:00:01.311) 0:11:07.001 ****** 2026-02-17 05:57:59.536081 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:59.536092 | orchestrator | 2026-02-17 05:57:59.536104 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 05:57:59.536115 | orchestrator | Tuesday 17 February 2026 05:57:53 +0000 (0:00:01.494) 0:11:08.495 ****** 2026-02-17 05:57:59.536126 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:59.536137 | orchestrator | 2026-02-17 05:57:59.536148 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 05:57:59.536159 | orchestrator | Tuesday 17 February 2026 05:57:54 +0000 (0:00:01.152) 0:11:09.648 ****** 2026-02-17 05:57:59.536171 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:57:59.536184 | orchestrator | 2026-02-17 05:57:59.536196 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 05:57:59.536209 | orchestrator | Tuesday 17 February 2026 05:57:55 +0000 (0:00:01.592) 0:11:11.241 ****** 2026-02-17 05:57:59.536222 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:59.536235 | orchestrator | 2026-02-17 05:57:59.536248 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 05:57:59.536260 | orchestrator | Tuesday 17 February 2026 05:57:57 +0000 (0:00:01.114) 0:11:12.356 ****** 2026-02-17 05:57:59.536273 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:59.536286 | orchestrator | 2026-02-17 05:57:59.536298 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 05:57:59.536311 | orchestrator | Tuesday 17 February 2026 05:57:58 +0000 (0:00:01.254) 0:11:13.610 ****** 2026-02-17 05:57:59.536324 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:57:59.536336 | orchestrator | 2026-02-17 05:57:59.536349 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 05:57:59.536368 | orchestrator | Tuesday 17 February 2026 05:57:59 +0000 (0:00:01.187) 0:11:14.798 ****** 2026-02-17 05:58:40.481410 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-17 05:58:40.481624 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 05:58:40.481696 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-17 05:58:40.481714 | orchestrator | 2026-02-17 05:58:40.481732 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 05:58:40.481749 | orchestrator | Tuesday 17 February 2026 05:58:01 +0000 (0:00:01.749) 0:11:16.547 ****** 2026-02-17 05:58:40.481813 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-17 05:58:40.481830 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-17 05:58:40.481845 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-17 05:58:40.481861 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.481877 | orchestrator | 2026-02-17 05:58:40.481895 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 05:58:40.481911 | orchestrator | Tuesday 17 February 2026 05:58:02 +0000 (0:00:01.197) 0:11:17.744 ****** 2026-02-17 05:58:40.481929 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.481947 | orchestrator | 2026-02-17 05:58:40.481960 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 05:58:40.481975 | orchestrator | Tuesday 17 February 2026 05:58:03 +0000 (0:00:01.139) 0:11:18.884 ****** 2026-02-17 05:58:40.482126 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 05:58:40.482178 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 
05:58:40.482225 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:58:40.482240 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 05:58:40.482253 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 05:58:40.482267 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 05:58:40.482281 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 05:58:40.482294 | orchestrator | 2026-02-17 05:58:40.482308 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 05:58:40.482320 | orchestrator | Tuesday 17 February 2026 05:58:05 +0000 (0:00:02.221) 0:11:21.105 ****** 2026-02-17 05:58:40.482333 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 05:58:40.482347 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 05:58:40.482362 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 05:58:40.482375 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 05:58:40.482430 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 05:58:40.482443 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 05:58:40.482456 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 05:58:40.482469 | orchestrator | 2026-02-17 05:58:40.482482 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-17 05:58:40.482497 | orchestrator | Tuesday 17 February 2026 05:58:08 +0000 (0:00:02.311) 0:11:23.416 
****** 2026-02-17 05:58:40.482512 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.482525 | orchestrator | 2026-02-17 05:58:40.482539 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-17 05:58:40.482570 | orchestrator | Tuesday 17 February 2026 05:58:09 +0000 (0:00:00.879) 0:11:24.295 ****** 2026-02-17 05:58:40.482584 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.482597 | orchestrator | 2026-02-17 05:58:40.482611 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-17 05:58:40.482674 | orchestrator | Tuesday 17 February 2026 05:58:09 +0000 (0:00:00.882) 0:11:25.178 ****** 2026-02-17 05:58:40.482688 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.482715 | orchestrator | 2026-02-17 05:58:40.482729 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-17 05:58:40.482742 | orchestrator | Tuesday 17 February 2026 05:58:10 +0000 (0:00:00.853) 0:11:26.031 ****** 2026-02-17 05:58:40.482756 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.482770 | orchestrator | 2026-02-17 05:58:40.482783 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-17 05:58:40.482795 | orchestrator | Tuesday 17 February 2026 05:58:12 +0000 (0:00:01.311) 0:11:27.343 ****** 2026-02-17 05:58:40.482808 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.482822 | orchestrator | 2026-02-17 05:58:40.482834 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-17 05:58:40.482883 | orchestrator | Tuesday 17 February 2026 05:58:12 +0000 (0:00:00.783) 0:11:28.127 ****** 2026-02-17 05:58:40.482898 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-17 05:58:40.482911 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-17 
05:58:40.482925 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-17 05:58:40.482939 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.482953 | orchestrator | 2026-02-17 05:58:40.482968 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-17 05:58:40.482981 | orchestrator | Tuesday 17 February 2026 05:58:13 +0000 (0:00:01.134) 0:11:29.262 ****** 2026-02-17 05:58:40.482994 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-17 05:58:40.483008 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-17 05:58:40.483077 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-17 05:58:40.483094 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-17 05:58:40.483107 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-17 05:58:40.483120 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-17 05:58:40.483135 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.483148 | orchestrator | 2026-02-17 05:58:40.483161 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-17 05:58:40.483173 | orchestrator | Tuesday 17 February 2026 05:58:15 +0000 (0:00:01.405) 0:11:30.668 ****** 2026-02-17 05:58:40.483186 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 05:58:40.483199 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 05:58:40.483212 | orchestrator | 2026-02-17 05:58:40.483263 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-17 05:58:40.483277 | orchestrator | Tuesday 17 February 2026 05:58:19 +0000 (0:00:04.407) 0:11:35.075 ****** 
2026-02-17 05:58:40.483291 | orchestrator | changed: [testbed-node-1] 2026-02-17 05:58:40.483306 | orchestrator | 2026-02-17 05:58:40.483319 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 05:58:40.483333 | orchestrator | Tuesday 17 February 2026 05:58:22 +0000 (0:00:02.262) 0:11:37.338 ****** 2026-02-17 05:58:40.483348 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-17 05:58:40.483364 | orchestrator | 2026-02-17 05:58:40.483377 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 05:58:40.483422 | orchestrator | Tuesday 17 February 2026 05:58:23 +0000 (0:00:01.104) 0:11:38.443 ****** 2026-02-17 05:58:40.483436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-17 05:58:40.483449 | orchestrator | 2026-02-17 05:58:40.483461 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 05:58:40.483473 | orchestrator | Tuesday 17 February 2026 05:58:24 +0000 (0:00:01.203) 0:11:39.646 ****** 2026-02-17 05:58:40.483487 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:58:40.483513 | orchestrator | 2026-02-17 05:58:40.483527 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 05:58:40.483540 | orchestrator | Tuesday 17 February 2026 05:58:25 +0000 (0:00:01.604) 0:11:41.250 ****** 2026-02-17 05:58:40.483555 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.483569 | orchestrator | 2026-02-17 05:58:40.483582 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 05:58:40.483695 | orchestrator | Tuesday 17 February 2026 05:58:27 +0000 (0:00:01.155) 0:11:42.406 ****** 2026-02-17 05:58:40.483715 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
05:58:40.483730 | orchestrator | 2026-02-17 05:58:40.483743 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 05:58:40.483756 | orchestrator | Tuesday 17 February 2026 05:58:28 +0000 (0:00:01.167) 0:11:43.574 ****** 2026-02-17 05:58:40.483769 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.483783 | orchestrator | 2026-02-17 05:58:40.483797 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 05:58:40.483847 | orchestrator | Tuesday 17 February 2026 05:58:29 +0000 (0:00:01.136) 0:11:44.710 ****** 2026-02-17 05:58:40.483862 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:58:40.483875 | orchestrator | 2026-02-17 05:58:40.483888 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 05:58:40.483901 | orchestrator | Tuesday 17 February 2026 05:58:30 +0000 (0:00:01.535) 0:11:46.245 ****** 2026-02-17 05:58:40.483914 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.483927 | orchestrator | 2026-02-17 05:58:40.483940 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 05:58:40.483954 | orchestrator | Tuesday 17 February 2026 05:58:32 +0000 (0:00:01.113) 0:11:47.359 ****** 2026-02-17 05:58:40.483967 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.483981 | orchestrator | 2026-02-17 05:58:40.483994 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 05:58:40.484008 | orchestrator | Tuesday 17 February 2026 05:58:33 +0000 (0:00:01.157) 0:11:48.516 ****** 2026-02-17 05:58:40.484022 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:58:40.484071 | orchestrator | 2026-02-17 05:58:40.484086 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 05:58:40.484100 | orchestrator | Tuesday 17 February 2026 
05:58:34 +0000 (0:00:01.584) 0:11:50.101 ****** 2026-02-17 05:58:40.484113 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:58:40.484127 | orchestrator | 2026-02-17 05:58:40.484139 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 05:58:40.484154 | orchestrator | Tuesday 17 February 2026 05:58:36 +0000 (0:00:01.620) 0:11:51.721 ****** 2026-02-17 05:58:40.484166 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.484180 | orchestrator | 2026-02-17 05:58:40.484193 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 05:58:40.484207 | orchestrator | Tuesday 17 February 2026 05:58:37 +0000 (0:00:00.777) 0:11:52.498 ****** 2026-02-17 05:58:40.484254 | orchestrator | ok: [testbed-node-1] 2026-02-17 05:58:40.484267 | orchestrator | 2026-02-17 05:58:40.484281 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 05:58:40.484432 | orchestrator | Tuesday 17 February 2026 05:58:38 +0000 (0:00:00.825) 0:11:53.323 ****** 2026-02-17 05:58:40.484465 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.484478 | orchestrator | 2026-02-17 05:58:40.484491 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 05:58:40.484504 | orchestrator | Tuesday 17 February 2026 05:58:38 +0000 (0:00:00.763) 0:11:54.087 ****** 2026-02-17 05:58:40.484518 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:58:40.484531 | orchestrator | 2026-02-17 05:58:40.484544 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 05:58:40.484557 | orchestrator | Tuesday 17 February 2026 05:58:39 +0000 (0:00:00.812) 0:11:54.900 ****** 2026-02-17 05:58:40.484587 | orchestrator | skipping: [testbed-node-1] 2026-02-17 05:59:21.131581 | orchestrator | 2026-02-17 05:59:21.131752 | orchestrator | TASK [ceph-handler 
: Set_fact handler_nfs_status] ******************************
2026-02-17 05:59:21.131773 | orchestrator | Tuesday 17 February 2026 05:58:40 +0000 (0:00:00.841) 0:11:55.742 ******
2026-02-17 05:59:21.131786 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.131798 | orchestrator |
2026-02-17 05:59:21.131809 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 05:59:21.131820 | orchestrator | Tuesday 17 February 2026 05:58:41 +0000 (0:00:00.785) 0:11:56.527 ******
2026-02-17 05:59:21.131831 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.131842 | orchestrator |
2026-02-17 05:59:21.131853 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 05:59:21.131864 | orchestrator | Tuesday 17 February 2026 05:58:42 +0000 (0:00:00.787) 0:11:57.315 ******
2026-02-17 05:59:21.131876 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.131887 | orchestrator |
2026-02-17 05:59:21.131898 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 05:59:21.131909 | orchestrator | Tuesday 17 February 2026 05:58:42 +0000 (0:00:00.773) 0:11:58.089 ******
2026-02-17 05:59:21.131920 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.131931 | orchestrator |
2026-02-17 05:59:21.131970 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 05:59:21.131981 | orchestrator | Tuesday 17 February 2026 05:58:43 +0000 (0:00:00.913) 0:11:59.002 ******
2026-02-17 05:59:21.131992 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.132003 | orchestrator |
2026-02-17 05:59:21.132014 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 05:59:21.132025 | orchestrator | Tuesday 17 February 2026 05:58:44 +0000 (0:00:00.789) 0:11:59.792 ******
2026-02-17 05:59:21.132036 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132047 | orchestrator |
2026-02-17 05:59:21.132058 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 05:59:21.132069 | orchestrator | Tuesday 17 February 2026 05:58:45 +0000 (0:00:00.752) 0:12:00.545 ******
2026-02-17 05:59:21.132091 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132102 | orchestrator |
2026-02-17 05:59:21.132114 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 05:59:21.132126 | orchestrator | Tuesday 17 February 2026 05:58:46 +0000 (0:00:00.839) 0:12:01.385 ******
2026-02-17 05:59:21.132139 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132151 | orchestrator |
2026-02-17 05:59:21.132164 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 05:59:21.132176 | orchestrator | Tuesday 17 February 2026 05:58:46 +0000 (0:00:00.807) 0:12:02.192 ******
2026-02-17 05:59:21.132188 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132200 | orchestrator |
2026-02-17 05:59:21.132212 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 05:59:21.132225 | orchestrator | Tuesday 17 February 2026 05:58:47 +0000 (0:00:00.809) 0:12:03.002 ******
2026-02-17 05:59:21.132237 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132249 | orchestrator |
2026-02-17 05:59:21.132261 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 05:59:21.132273 | orchestrator | Tuesday 17 February 2026 05:58:48 +0000 (0:00:00.799) 0:12:03.802 ******
2026-02-17 05:59:21.132285 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132297 | orchestrator |
2026-02-17 05:59:21.132310 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 05:59:21.132322 | orchestrator | Tuesday 17 February 2026 05:58:49 +0000 (0:00:00.770) 0:12:04.572 ******
2026-02-17 05:59:21.132335 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132347 | orchestrator |
2026-02-17 05:59:21.132360 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 05:59:21.132390 | orchestrator | Tuesday 17 February 2026 05:58:50 +0000 (0:00:00.776) 0:12:05.348 ******
2026-02-17 05:59:21.132403 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132437 | orchestrator |
2026-02-17 05:59:21.132451 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 05:59:21.132464 | orchestrator | Tuesday 17 February 2026 05:58:50 +0000 (0:00:00.756) 0:12:06.104 ******
2026-02-17 05:59:21.132477 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132488 | orchestrator |
2026-02-17 05:59:21.132499 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 05:59:21.132510 | orchestrator | Tuesday 17 February 2026 05:58:51 +0000 (0:00:00.767) 0:12:06.872 ******
2026-02-17 05:59:21.132520 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132531 | orchestrator |
2026-02-17 05:59:21.132542 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 05:59:21.132553 | orchestrator | Tuesday 17 February 2026 05:58:52 +0000 (0:00:00.780) 0:12:07.653 ******
2026-02-17 05:59:21.132564 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132574 | orchestrator |
2026-02-17 05:59:21.132585 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 05:59:21.132596 | orchestrator | Tuesday 17 February 2026 05:58:53 +0000 (0:00:00.772) 0:12:08.426 ******
2026-02-17 05:59:21.132607 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132618 | orchestrator |
2026-02-17 05:59:21.132629 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 05:59:21.132640 | orchestrator | Tuesday 17 February 2026 05:58:53 +0000 (0:00:00.806) 0:12:09.233 ******
2026-02-17 05:59:21.132651 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.132661 | orchestrator |
2026-02-17 05:59:21.132672 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 05:59:21.132683 | orchestrator | Tuesday 17 February 2026 05:58:55 +0000 (0:00:01.571) 0:12:10.804 ******
2026-02-17 05:59:21.132712 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.132723 | orchestrator |
2026-02-17 05:59:21.132734 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 05:59:21.132745 | orchestrator | Tuesday 17 February 2026 05:58:57 +0000 (0:00:02.037) 0:12:12.841 ******
2026-02-17 05:59:21.132756 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-17 05:59:21.132768 | orchestrator |
2026-02-17 05:59:21.132796 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 05:59:21.132808 | orchestrator | Tuesday 17 February 2026 05:58:58 +0000 (0:00:01.119) 0:12:13.961 ******
2026-02-17 05:59:21.132819 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132829 | orchestrator |
2026-02-17 05:59:21.132840 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 05:59:21.132852 | orchestrator | Tuesday 17 February 2026 05:58:59 +0000 (0:00:01.171) 0:12:15.132 ******
2026-02-17 05:59:21.132863 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.132873 | orchestrator |
2026-02-17 05:59:21.132884 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 05:59:21.132895 | orchestrator | Tuesday 17 February 2026 05:59:01 +0000 (0:00:01.194) 0:12:16.326 ******
2026-02-17 05:59:21.132906 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 05:59:21.132917 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 05:59:21.132928 | orchestrator |
2026-02-17 05:59:21.132938 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 05:59:21.132949 | orchestrator | Tuesday 17 February 2026 05:59:02 +0000 (0:00:01.867) 0:12:18.194 ******
2026-02-17 05:59:21.132960 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.132971 | orchestrator |
2026-02-17 05:59:21.132981 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 05:59:21.132992 | orchestrator | Tuesday 17 February 2026 05:59:04 +0000 (0:00:01.512) 0:12:19.706 ******
2026-02-17 05:59:21.133003 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133014 | orchestrator |
2026-02-17 05:59:21.133025 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 05:59:21.133044 | orchestrator | Tuesday 17 February 2026 05:59:05 +0000 (0:00:01.167) 0:12:20.874 ******
2026-02-17 05:59:21.133054 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133065 | orchestrator |
2026-02-17 05:59:21.133076 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 05:59:21.133087 | orchestrator | Tuesday 17 February 2026 05:59:06 +0000 (0:00:00.797) 0:12:21.671 ******
2026-02-17 05:59:21.133098 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133109 | orchestrator |
2026-02-17 05:59:21.133120 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 05:59:21.133131 | orchestrator | Tuesday 17 February 2026 05:59:07 +0000 (0:00:00.797) 0:12:22.469 ******
2026-02-17 05:59:21.133141 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-17 05:59:21.133152 | orchestrator |
2026-02-17 05:59:21.133163 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 05:59:21.133174 | orchestrator | Tuesday 17 February 2026 05:59:08 +0000 (0:00:01.146) 0:12:23.615 ******
2026-02-17 05:59:21.133185 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.133196 | orchestrator |
2026-02-17 05:59:21.133207 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 05:59:21.133218 | orchestrator | Tuesday 17 February 2026 05:59:10 +0000 (0:00:01.747) 0:12:25.363 ******
2026-02-17 05:59:21.133229 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 05:59:21.133240 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 05:59:21.133251 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 05:59:21.133261 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133272 | orchestrator |
2026-02-17 05:59:21.133283 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 05:59:21.133300 | orchestrator | Tuesday 17 February 2026 05:59:11 +0000 (0:00:01.185) 0:12:26.548 ******
2026-02-17 05:59:21.133311 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133322 | orchestrator |
2026-02-17 05:59:21.133333 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 05:59:21.133344 | orchestrator | Tuesday 17 February 2026 05:59:12 +0000 (0:00:01.160) 0:12:27.708 ******
2026-02-17 05:59:21.133355 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133366 | orchestrator |
2026-02-17 05:59:21.133376 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 05:59:21.133387 | orchestrator | Tuesday 17 February 2026 05:59:13 +0000 (0:00:01.210) 0:12:28.919 ******
2026-02-17 05:59:21.133398 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133409 | orchestrator |
2026-02-17 05:59:21.133420 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 05:59:21.133431 | orchestrator | Tuesday 17 February 2026 05:59:14 +0000 (0:00:01.169) 0:12:30.088 ******
2026-02-17 05:59:21.133442 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133453 | orchestrator |
2026-02-17 05:59:21.133464 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-17 05:59:21.133475 | orchestrator | Tuesday 17 February 2026 05:59:15 +0000 (0:00:01.137) 0:12:31.226 ******
2026-02-17 05:59:21.133486 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:21.133497 | orchestrator |
2026-02-17 05:59:21.133507 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 05:59:21.133518 | orchestrator | Tuesday 17 February 2026 05:59:16 +0000 (0:00:00.794) 0:12:32.021 ******
2026-02-17 05:59:21.133529 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.133540 | orchestrator |
2026-02-17 05:59:21.133551 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 05:59:21.133562 | orchestrator | Tuesday 17 February 2026 05:59:19 +0000 (0:00:02.287) 0:12:34.309 ******
2026-02-17 05:59:21.133580 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:21.133590 | orchestrator |
2026-02-17 05:59:21.133601 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 05:59:21.133612 | orchestrator | Tuesday 17 February 2026 05:59:19 +0000 (0:00:00.799) 0:12:35.108 ******
2026-02-17 05:59:21.133623 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-17 05:59:21.133634 | orchestrator |
2026-02-17 05:59:21.133651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 05:59:58.324285 | orchestrator | Tuesday 17 February 2026 05:59:21 +0000 (0:00:01.278) 0:12:36.386 ******
2026-02-17 05:59:58.324419 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.324438 | orchestrator |
2026-02-17 05:59:58.324450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 05:59:58.324462 | orchestrator | Tuesday 17 February 2026 05:59:22 +0000 (0:00:01.141) 0:12:37.528 ******
2026-02-17 05:59:58.324474 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.324485 | orchestrator |
2026-02-17 05:59:58.324496 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 05:59:58.324507 | orchestrator | Tuesday 17 February 2026 05:59:23 +0000 (0:00:01.204) 0:12:38.732 ******
2026-02-17 05:59:58.324518 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.324529 | orchestrator |
2026-02-17 05:59:58.324540 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 05:59:58.324552 | orchestrator | Tuesday 17 February 2026 05:59:24 +0000 (0:00:01.136) 0:12:39.869 ******
2026-02-17 05:59:58.324563 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.324574 | orchestrator |
2026-02-17 05:59:58.324585 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 05:59:58.324595 | orchestrator | Tuesday 17 February 2026 05:59:25 +0000 (0:00:01.153) 0:12:41.023 ******
2026-02-17 05:59:58.324606 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.324617 | orchestrator |
2026-02-17 05:59:58.324628 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 05:59:58.324639 | orchestrator | Tuesday 17 February 2026 05:59:26 +0000 (0:00:01.201) 0:12:42.225 ******
2026-02-17 05:59:58.324650 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.324660 | orchestrator |
2026-02-17 05:59:58.324671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 05:59:58.324682 | orchestrator | Tuesday 17 February 2026 05:59:28 +0000 (0:00:01.214) 0:12:43.440 ******
2026-02-17 05:59:58.324693 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.324704 | orchestrator |
2026-02-17 05:59:58.324715 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 05:59:58.324726 | orchestrator | Tuesday 17 February 2026 05:59:29 +0000 (0:00:01.182) 0:12:44.622 ******
2026-02-17 05:59:58.324736 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.324776 | orchestrator |
2026-02-17 05:59:58.324795 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 05:59:58.324808 | orchestrator | Tuesday 17 February 2026 05:59:30 +0000 (0:00:01.191) 0:12:45.814 ******
2026-02-17 05:59:58.324820 | orchestrator | ok: [testbed-node-1]
2026-02-17 05:59:58.324834 | orchestrator |
2026-02-17 05:59:58.324847 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 05:59:58.324860 | orchestrator | Tuesday 17 February 2026 05:59:31 +0000 (0:00:00.798) 0:12:46.613 ******
2026-02-17 05:59:58.324872 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-17 05:59:58.324885 | orchestrator |
2026-02-17 05:59:58.324898 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 05:59:58.324911 | orchestrator | Tuesday 17 February 2026 05:59:32 +0000 (0:00:01.186) 0:12:47.800 ******
2026-02-17 05:59:58.324924 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-17 05:59:58.324937 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-17 05:59:58.324949 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-17 05:59:58.324990 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-17 05:59:58.325016 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-17 05:59:58.325060 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-17 05:59:58.325079 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-17 05:59:58.325095 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-17 05:59:58.325112 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 05:59:58.325131 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 05:59:58.325149 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 05:59:58.325168 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 05:59:58.325187 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 05:59:58.325205 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 05:59:58.325223 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-17 05:59:58.325238 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-17 05:59:58.325249 | orchestrator |
2026-02-17 05:59:58.325260 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 05:59:58.325270 | orchestrator | Tuesday 17 February 2026 05:59:38 +0000 (0:00:06.430) 0:12:54.230 ******
2026-02-17 05:59:58.325281 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325292 | orchestrator |
2026-02-17 05:59:58.325303 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 05:59:58.325314 | orchestrator | Tuesday 17 February 2026 05:59:39 +0000 (0:00:00.778) 0:12:55.009 ******
2026-02-17 05:59:58.325325 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325336 | orchestrator |
2026-02-17 05:59:58.325347 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 05:59:58.325358 | orchestrator | Tuesday 17 February 2026 05:59:40 +0000 (0:00:00.926) 0:12:55.935 ******
2026-02-17 05:59:58.325369 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325380 | orchestrator |
2026-02-17 05:59:58.325391 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 05:59:58.325402 | orchestrator | Tuesday 17 February 2026 05:59:41 +0000 (0:00:00.840) 0:12:56.775 ******
2026-02-17 05:59:58.325413 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325424 | orchestrator |
2026-02-17 05:59:58.325435 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 05:59:58.325467 | orchestrator | Tuesday 17 February 2026 05:59:42 +0000 (0:00:00.841) 0:12:57.617 ******
2026-02-17 05:59:58.325478 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325491 | orchestrator |
2026-02-17 05:59:58.325516 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 05:59:58.325539 | orchestrator | Tuesday 17 February 2026 05:59:43 +0000 (0:00:00.791) 0:12:58.408 ******
2026-02-17 05:59:58.325557 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325574 | orchestrator |
2026-02-17 05:59:58.325592 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 05:59:58.325610 | orchestrator | Tuesday 17 February 2026 05:59:43 +0000 (0:00:00.811) 0:12:59.220 ******
2026-02-17 05:59:58.325627 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325644 | orchestrator |
2026-02-17 05:59:58.325661 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 05:59:58.325681 | orchestrator | Tuesday 17 February 2026 05:59:44 +0000 (0:00:00.818) 0:13:00.039 ******
2026-02-17 05:59:58.325701 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325719 | orchestrator |
2026-02-17 05:59:58.325737 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 05:59:58.325801 | orchestrator | Tuesday 17 February 2026 05:59:45 +0000 (0:00:00.793) 0:13:00.832 ******
2026-02-17 05:59:58.325839 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325859 | orchestrator |
2026-02-17 05:59:58.325876 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 05:59:58.325894 | orchestrator | Tuesday 17 February 2026 05:59:46 +0000 (0:00:00.774) 0:13:01.606 ******
2026-02-17 05:59:58.325913 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.325932 | orchestrator |
2026-02-17 05:59:58.325960 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 05:59:58.325981 | orchestrator | Tuesday 17 February 2026 05:59:47 +0000 (0:00:00.774) 0:13:02.381 ******
2026-02-17 05:59:58.325998 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326096 | orchestrator |
2026-02-17 05:59:58.326124 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 05:59:58.326143 | orchestrator | Tuesday 17 February 2026 05:59:47 +0000 (0:00:00.773) 0:13:03.154 ******
2026-02-17 05:59:58.326161 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326179 | orchestrator |
2026-02-17 05:59:58.326197 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 05:59:58.326216 | orchestrator | Tuesday 17 February 2026 05:59:48 +0000 (0:00:00.769) 0:13:03.924 ******
2026-02-17 05:59:58.326233 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326251 | orchestrator |
2026-02-17 05:59:58.326268 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 05:59:58.326286 | orchestrator | Tuesday 17 February 2026 05:59:49 +0000 (0:00:00.873) 0:13:04.797 ******
2026-02-17 05:59:58.326303 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326322 | orchestrator |
2026-02-17 05:59:58.326339 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 05:59:58.326359 | orchestrator | Tuesday 17 February 2026 05:59:50 +0000 (0:00:00.787) 0:13:05.584 ******
2026-02-17 05:59:58.326379 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326396 | orchestrator |
2026-02-17 05:59:58.326414 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 05:59:58.326432 | orchestrator | Tuesday 17 February 2026 05:59:51 +0000 (0:00:00.870) 0:13:06.455 ******
2026-02-17 05:59:58.326449 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326467 | orchestrator |
2026-02-17 05:59:58.326484 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 05:59:58.326514 | orchestrator | Tuesday 17 February 2026 05:59:52 +0000 (0:00:00.857) 0:13:07.312 ******
2026-02-17 05:59:58.326534 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326553 | orchestrator |
2026-02-17 05:59:58.326572 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 05:59:58.326593 | orchestrator | Tuesday 17 February 2026 05:59:52 +0000 (0:00:00.775) 0:13:08.088 ******
2026-02-17 05:59:58.326610 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326629 | orchestrator |
2026-02-17 05:59:58.326641 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 05:59:58.326652 | orchestrator | Tuesday 17 February 2026 05:59:53 +0000 (0:00:00.824) 0:13:08.912 ******
2026-02-17 05:59:58.326663 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326673 | orchestrator |
2026-02-17 05:59:58.326684 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 05:59:58.326695 | orchestrator | Tuesday 17 February 2026 05:59:54 +0000 (0:00:00.897) 0:13:09.810 ******
2026-02-17 05:59:58.326706 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326716 | orchestrator |
2026-02-17 05:59:58.326727 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 05:59:58.326833 | orchestrator | Tuesday 17 February 2026 05:59:55 +0000 (0:00:00.809) 0:13:10.620 ******
2026-02-17 05:59:58.326848 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326859 | orchestrator |
2026-02-17 05:59:58.326870 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 05:59:58.326893 | orchestrator | Tuesday 17 February 2026 05:59:56 +0000 (0:00:00.774) 0:13:11.395 ******
2026-02-17 05:59:58.326904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-17 05:59:58.326915 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-17 05:59:58.326926 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-17 05:59:58.326937 | orchestrator | skipping: [testbed-node-1]
2026-02-17 05:59:58.326947 | orchestrator |
2026-02-17 05:59:58.326958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 05:59:58.326969 | orchestrator | Tuesday 17 February 2026 05:59:57 +0000 (0:00:01.128) 0:13:12.524 ******
2026-02-17 05:59:58.326980 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-17 05:59:58.327007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-17 06:01:25.961078 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-17 06:01:25.961214 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:01:25.961233 | orchestrator |
2026-02-17 06:01:25.961250 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:01:25.961266 | orchestrator | Tuesday 17 February 2026 05:59:58 +0000 (0:00:01.057) 0:13:13.582 ******
2026-02-17 06:01:25.961280 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-17 06:01:25.961294 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-17 06:01:25.961308 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-17 06:01:25.961322 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:01:25.961335 | orchestrator |
2026-02-17 06:01:25.961350 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:01:25.961365 | orchestrator | Tuesday 17 February 2026 05:59:59 +0000 (0:00:01.126) 0:13:14.708 ******
2026-02-17 06:01:25.961379 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:01:25.961393 | orchestrator |
2026-02-17 06:01:25.961407 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:01:25.961421 | orchestrator | Tuesday 17 February 2026 06:00:00 +0000 (0:00:00.786) 0:13:15.495 ******
2026-02-17 06:01:25.961438 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-17 06:01:25.961452 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:01:25.961466 | orchestrator |
2026-02-17 06:01:25.961482 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 06:01:25.961497 | orchestrator | Tuesday 17 February 2026 06:00:01 +0000 (0:00:00.981) 0:13:16.476 ******
2026-02-17 06:01:25.961509 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.961518 | orchestrator |
2026-02-17 06:01:25.961527 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-17 06:01:25.961536 | orchestrator | Tuesday 17 February 2026 06:00:02 +0000 (0:00:01.565) 0:13:18.042 ******
2026-02-17 06:01:25.961545 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.961554 | orchestrator |
2026-02-17 06:01:25.961564 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-17 06:01:25.961573 | orchestrator | Tuesday 17 February 2026 06:00:03 +0000 (0:00:00.807) 0:13:18.849 ******
2026-02-17 06:01:25.961582 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-02-17 06:01:25.961592 | orchestrator |
2026-02-17 06:01:25.961602 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-17 06:01:25.961612 | orchestrator | Tuesday 17 February 2026 06:00:04 +0000 (0:00:01.177) 0:13:20.027 ******
2026-02-17 06:01:25.961622 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-02-17 06:01:25.961632 | orchestrator |
2026-02-17 06:01:25.961643 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-17 06:01:25.961653 | orchestrator | Tuesday 17 February 2026 06:00:07 +0000 (0:00:03.152) 0:13:23.180 ******
2026-02-17 06:01:25.961664 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:01:25.961674 | orchestrator |
2026-02-17 06:01:25.961710 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-17 06:01:25.961721 | orchestrator | Tuesday 17 February 2026 06:00:09 +0000 (0:00:01.284) 0:13:24.464 ******
2026-02-17 06:01:25.961730 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.961741 | orchestrator |
2026-02-17 06:01:25.961752 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-17 06:01:25.961762 | orchestrator | Tuesday 17 February 2026 06:00:10 +0000 (0:00:01.236) 0:13:25.701 ******
2026-02-17 06:01:25.961771 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.961781 | orchestrator |
2026-02-17 06:01:25.961806 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-17 06:01:25.961816 | orchestrator | Tuesday 17 February 2026 06:00:11 +0000 (0:00:01.202) 0:13:26.903 ******
2026-02-17 06:01:25.961828 | orchestrator | changed: [testbed-node-1]
2026-02-17 06:01:25.961838 | orchestrator |
2026-02-17 06:01:25.961849 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-17 06:01:25.961859 | orchestrator | Tuesday 17 February 2026 06:00:13 +0000 (0:00:02.084) 0:13:28.988 ******
2026-02-17 06:01:25.961898 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.961909 | orchestrator |
2026-02-17 06:01:25.961919 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-17 06:01:25.961930 | orchestrator | Tuesday 17 February 2026 06:00:15 +0000 (0:00:01.697) 0:13:30.685 ******
2026-02-17 06:01:25.961941 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.961951 | orchestrator |
2026-02-17 06:01:25.961962 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-17 06:01:25.961972 | orchestrator | Tuesday 17 February 2026 06:00:16 +0000 (0:00:01.514) 0:13:32.199 ******
2026-02-17 06:01:25.961981 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.961990 | orchestrator |
2026-02-17 06:01:25.961999 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-17 06:01:25.962006 | orchestrator | Tuesday 17 February 2026 06:00:18 +0000 (0:00:01.540) 0:13:33.739 ******
2026-02-17 06:01:25.962060 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-17 06:01:25.962071 | orchestrator |
2026-02-17 06:01:25.962079 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-17 06:01:25.962087 | orchestrator | Tuesday 17 February 2026 06:00:20 +0000 (0:00:01.565) 0:13:35.305 ******
2026-02-17 06:01:25.962096 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-17 06:01:25.962103 | orchestrator |
2026-02-17 06:01:25.962111 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-17 06:01:25.962119 | orchestrator | Tuesday 17 February 2026 06:00:21 +0000 (0:00:01.622) 0:13:36.927 ******
2026-02-17 06:01:25.962127 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-17 06:01:25.962135 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-17 06:01:25.962143 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-17 06:01:25.962152 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-17 06:01:25.962160 | orchestrator |
2026-02-17 06:01:25.962192 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-17 06:01:25.962207 | orchestrator | Tuesday 17 February 2026 06:00:25 +0000 (0:00:03.981) 0:13:40.909 ******
2026-02-17 06:01:25.962219 | orchestrator | changed: [testbed-node-1]
2026-02-17 06:01:25.962233 | orchestrator |
2026-02-17 06:01:25.962246 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-17 06:01:25.962258 | orchestrator | Tuesday 17 February 2026 06:00:27 +0000 (0:00:02.065) 0:13:42.974 ******
2026-02-17 06:01:25.962270 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.962283 | orchestrator |
2026-02-17 06:01:25.962297 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-17 06:01:25.962310 | orchestrator | Tuesday 17 February 2026 06:00:28 +0000 (0:00:01.141) 0:13:44.115 ******
2026-02-17 06:01:25.962322 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.962333 | orchestrator |
2026-02-17 06:01:25.962360 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-17 06:01:25.962373 | orchestrator | Tuesday 17 February 2026 06:00:29 +0000 (0:00:01.132) 0:13:45.248 ******
2026-02-17 06:01:25.962386 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.962398 | orchestrator |
2026-02-17 06:01:25.962412 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-17 06:01:25.962424 | orchestrator | Tuesday 17 February 2026 06:00:31 +0000 (0:00:01.765) 0:13:47.013 ******
2026-02-17 06:01:25.962436 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.962449 | orchestrator |
2026-02-17 06:01:25.962460 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-17 06:01:25.962471 | orchestrator | Tuesday 17 February 2026 06:00:33 +0000 (0:00:01.490) 0:13:48.503 ******
2026-02-17 06:01:25.962483 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:01:25.962495 | orchestrator |
2026-02-17 06:01:25.962506 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-17 06:01:25.962518 | orchestrator | Tuesday 17 February 2026 06:00:34 +0000 (0:00:00.798) 0:13:49.302 ******
2026-02-17 06:01:25.962530 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1
2026-02-17 06:01:25.962543 | orchestrator |
2026-02-17 06:01:25.962554 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-17 06:01:25.962566 | orchestrator | Tuesday 17 February 2026 06:00:35 +0000 (0:00:01.180) 0:13:50.483 ******
2026-02-17 06:01:25.962578 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:01:25.962590 | orchestrator |
2026-02-17 06:01:25.962603 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-17 06:01:25.962615 | orchestrator | Tuesday 17 February 2026 06:00:36 +0000 (0:00:01.144) 0:13:51.627 ******
2026-02-17 06:01:25.962627 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:01:25.962640 | orchestrator |
2026-02-17 06:01:25.962653 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-17 06:01:25.962665 | orchestrator | Tuesday 17 February 2026 06:00:37 +0000 (0:00:01.143) 0:13:52.771 ******
2026-02-17 06:01:25.962677 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1
2026-02-17 06:01:25.962691 | orchestrator |
2026-02-17 06:01:25.962704 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-17 06:01:25.962717 | orchestrator | Tuesday 17 February 2026 06:00:38 +0000 (0:00:01.256) 0:13:54.028 ******
2026-02-17 06:01:25.962731 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.962744 | orchestrator |
2026-02-17 06:01:25.962758 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-17 06:01:25.962771 | orchestrator | Tuesday 17 February 2026 06:00:41 +0000 (0:00:02.345) 0:13:56.373 ******
2026-02-17 06:01:25.962782 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:01:25.962795 | orchestrator |
2026-02-17 06:01:25.962818 | orchestrator | TASK [ceph-mon : Enable
ceph-mon.target] *************************************** 2026-02-17 06:01:25.962832 | orchestrator | Tuesday 17 February 2026 06:00:43 +0000 (0:00:01.960) 0:13:58.333 ****** 2026-02-17 06:01:25.962845 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:01:25.962858 | orchestrator | 2026-02-17 06:01:25.962896 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-17 06:01:25.962909 | orchestrator | Tuesday 17 February 2026 06:00:45 +0000 (0:00:02.526) 0:14:00.860 ****** 2026-02-17 06:01:25.962921 | orchestrator | changed: [testbed-node-1] 2026-02-17 06:01:25.962934 | orchestrator | 2026-02-17 06:01:25.962948 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-17 06:01:25.962960 | orchestrator | Tuesday 17 February 2026 06:00:48 +0000 (0:00:03.007) 0:14:03.868 ****** 2026-02-17 06:01:25.962974 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-02-17 06:01:25.962987 | orchestrator | 2026-02-17 06:01:25.963000 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-17 06:01:25.963012 | orchestrator | Tuesday 17 February 2026 06:00:49 +0000 (0:00:01.137) 0:14:05.006 ****** 2026-02-17 06:01:25.963036 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-17 06:01:25.963050 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:01:25.963063 | orchestrator | 2026-02-17 06:01:25.963077 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-17 06:01:25.963091 | orchestrator | Tuesday 17 February 2026 06:01:12 +0000 (0:00:22.881) 0:14:27.888 ****** 2026-02-17 06:01:25.963104 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:01:25.963117 | orchestrator | 2026-02-17 06:01:25.963131 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-17 06:01:25.963145 | orchestrator | Tuesday 17 February 2026 06:01:15 +0000 (0:00:02.687) 0:14:30.575 ****** 2026-02-17 06:01:25.963158 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:01:25.963172 | orchestrator | 2026-02-17 06:01:25.963185 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-17 06:01:25.963199 | orchestrator | Tuesday 17 February 2026 06:01:16 +0000 (0:00:00.846) 0:14:31.422 ****** 2026-02-17 06:01:25.963228 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-17 06:01:58.560434 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-17 06:01:58.560577 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-17 06:01:58.560604 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-17 06:01:58.560624 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-17 06:01:58.560643 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}])  2026-02-17 06:01:58.560839 | orchestrator | 2026-02-17 06:01:58.560861 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-17 06:01:58.560879 | orchestrator | Tuesday 17 February 2026 06:01:25 +0000 (0:00:09.797) 0:14:41.220 ****** 2026-02-17 06:01:58.560895 | orchestrator | changed: [testbed-node-1] 2026-02-17 06:01:58.560942 | orchestrator | 
2026-02-17 06:01:58.560961 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 06:01:58.561031 | orchestrator | Tuesday 17 February 2026 06:01:28 +0000 (0:00:02.207) 0:14:43.427 ****** 2026-02-17 06:01:58.561052 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:01:58.561071 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-17 06:01:58.561088 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-17 06:01:58.561105 | orchestrator | 2026-02-17 06:01:58.561123 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 06:01:58.561140 | orchestrator | Tuesday 17 February 2026 06:01:29 +0000 (0:00:01.583) 0:14:45.011 ****** 2026-02-17 06:01:58.561158 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-17 06:01:58.561176 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-17 06:01:58.561193 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-17 06:01:58.561212 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:01:58.561229 | orchestrator | 2026-02-17 06:01:58.561246 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-17 06:01:58.561263 | orchestrator | Tuesday 17 February 2026 06:01:30 +0000 (0:00:01.115) 0:14:46.127 ****** 2026-02-17 06:01:58.561281 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:01:58.561298 | orchestrator | 2026-02-17 06:01:58.561314 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-17 06:01:58.561331 | orchestrator | Tuesday 17 February 2026 06:01:31 +0000 (0:00:00.789) 0:14:46.916 ****** 2026-02-17 06:01:58.561349 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:01:58.561366 | orchestrator | 2026-02-17 06:01:58.561383 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-17 06:01:58.561400 | orchestrator | 2026-02-17 06:01:58.561415 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-17 06:01:58.561433 | orchestrator | Tuesday 17 February 2026 06:01:33 +0000 (0:00:02.096) 0:14:49.012 ****** 2026-02-17 06:01:58.561448 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.561465 | orchestrator | 2026-02-17 06:01:58.561481 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-17 06:01:58.561575 | orchestrator | Tuesday 17 February 2026 06:01:34 +0000 (0:00:01.117) 0:14:50.129 ****** 2026-02-17 06:01:58.561596 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.561612 | orchestrator | 2026-02-17 06:01:58.561628 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-17 06:01:58.561645 | orchestrator | Tuesday 17 February 2026 06:01:35 +0000 (0:00:00.807) 0:14:50.936 ****** 2026-02-17 06:01:58.561662 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:01:58.561678 | orchestrator | 2026-02-17 06:01:58.561720 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-17 06:01:58.561737 | orchestrator | Tuesday 17 February 2026 06:01:36 +0000 (0:00:00.773) 0:14:51.710 ****** 2026-02-17 06:01:58.561754 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.561769 | orchestrator | 2026-02-17 06:01:58.561785 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:01:58.561802 | orchestrator | Tuesday 17 February 
2026 06:01:37 +0000 (0:00:00.787) 0:14:52.498 ****** 2026-02-17 06:01:58.561818 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-17 06:01:58.561835 | orchestrator | 2026-02-17 06:01:58.561851 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:01:58.561867 | orchestrator | Tuesday 17 February 2026 06:01:38 +0000 (0:00:01.157) 0:14:53.656 ****** 2026-02-17 06:01:58.561883 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.561900 | orchestrator | 2026-02-17 06:01:58.561956 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:01:58.561973 | orchestrator | Tuesday 17 February 2026 06:01:39 +0000 (0:00:01.467) 0:14:55.123 ****** 2026-02-17 06:01:58.561988 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.562004 | orchestrator | 2026-02-17 06:01:58.562116 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:01:58.562135 | orchestrator | Tuesday 17 February 2026 06:01:41 +0000 (0:00:01.176) 0:14:56.300 ****** 2026-02-17 06:01:58.562151 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.562179 | orchestrator | 2026-02-17 06:01:58.562193 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:01:58.562209 | orchestrator | Tuesday 17 February 2026 06:01:42 +0000 (0:00:01.507) 0:14:57.807 ****** 2026-02-17 06:01:58.562226 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.562242 | orchestrator | 2026-02-17 06:01:58.562257 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:01:58.562274 | orchestrator | Tuesday 17 February 2026 06:01:43 +0000 (0:00:01.127) 0:14:58.934 ****** 2026-02-17 06:01:58.562289 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.562306 | orchestrator | 2026-02-17 06:01:58.562323 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:01:58.562339 | orchestrator | Tuesday 17 February 2026 06:01:44 +0000 (0:00:01.189) 0:15:00.124 ****** 2026-02-17 06:01:58.562354 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.562368 | orchestrator | 2026-02-17 06:01:58.562384 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:01:58.562399 | orchestrator | Tuesday 17 February 2026 06:01:45 +0000 (0:00:01.143) 0:15:01.268 ****** 2026-02-17 06:01:58.562415 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:01:58.562431 | orchestrator | 2026-02-17 06:01:58.562447 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:01:58.562461 | orchestrator | Tuesday 17 February 2026 06:01:47 +0000 (0:00:01.148) 0:15:02.416 ****** 2026-02-17 06:01:58.562475 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.562491 | orchestrator | 2026-02-17 06:01:58.562507 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:01:58.562522 | orchestrator | Tuesday 17 February 2026 06:01:48 +0000 (0:00:01.146) 0:15:03.563 ****** 2026-02-17 06:01:58.562539 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:01:58.562567 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:01:58.562583 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-17 06:01:58.562598 | orchestrator | 2026-02-17 06:01:58.562613 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:01:58.562629 | orchestrator | Tuesday 17 February 2026 06:01:50 +0000 (0:00:02.069) 0:15:05.633 ****** 2026-02-17 06:01:58.562646 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:01:58.562661 | 
orchestrator | 2026-02-17 06:01:58.562678 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:01:58.562692 | orchestrator | Tuesday 17 February 2026 06:01:51 +0000 (0:00:01.314) 0:15:06.948 ****** 2026-02-17 06:01:58.562708 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:01:58.562724 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:01:58.562741 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-17 06:01:58.562757 | orchestrator | 2026-02-17 06:01:58.562772 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:01:58.562788 | orchestrator | Tuesday 17 February 2026 06:01:54 +0000 (0:00:03.167) 0:15:10.115 ****** 2026-02-17 06:01:58.562805 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-17 06:01:58.562821 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-17 06:01:58.562837 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-17 06:01:58.562853 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:01:58.562869 | orchestrator | 2026-02-17 06:01:58.562885 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:01:58.562902 | orchestrator | Tuesday 17 February 2026 06:01:56 +0000 (0:00:01.756) 0:15:11.871 ****** 2026-02-17 06:01:58.563041 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:01:58.563062 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:01:58.563100 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:02:19.556984 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.557112 | orchestrator | 2026-02-17 06:02:19.557140 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:02:19.557161 | orchestrator | Tuesday 17 February 2026 06:01:58 +0000 (0:00:01.945) 0:15:13.816 ****** 2026-02-17 06:02:19.557182 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:02:19.557207 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:02:19.557228 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:02:19.557249 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.557268 | orchestrator | 2026-02-17 06:02:19.557287 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:02:19.557299 | orchestrator | Tuesday 17 February 2026 06:01:59 +0000 (0:00:01.185) 0:15:15.002 ****** 2026-02-17 06:02:19.557331 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:01:52.195730', 'end': '2026-02-17 06:01:52.233002', 'delta': '0:00:00.037272', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:02:19.557348 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:01:53.098937', 'end': '2026-02-17 06:01:53.147989', 'delta': '0:00:00.049052', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:02:19.557388 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '4f72f9ce519e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:01:53.630850', 'end': '2026-02-17 06:01:53.678023', 'delta': '0:00:00.047173', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f72f9ce519e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:02:19.557400 | orchestrator | 2026-02-17 06:02:19.557412 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:02:19.557441 | orchestrator | Tuesday 17 February 2026 06:02:00 +0000 (0:00:01.227) 0:15:16.230 ****** 2026-02-17 06:02:19.557453 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:02:19.557465 | orchestrator | 2026-02-17 06:02:19.557478 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:02:19.557491 | orchestrator | Tuesday 17 February 2026 06:02:02 +0000 (0:00:01.308) 0:15:17.539 ****** 2026-02-17 06:02:19.557505 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.557517 | orchestrator | 2026-02-17 06:02:19.557531 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:02:19.557543 | orchestrator | Tuesday 17 February 2026 06:02:03 +0000 (0:00:01.248) 0:15:18.787 ****** 2026-02-17 06:02:19.557556 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:02:19.557569 | orchestrator | 2026-02-17 06:02:19.557582 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:02:19.557594 | orchestrator | Tuesday 17 February 2026 06:02:04 +0000 (0:00:01.139) 0:15:19.927 ****** 2026-02-17 06:02:19.557607 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:02:19.557621 | orchestrator | 2026-02-17 06:02:19.557634 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:02:19.557646 | orchestrator | Tuesday 17 February 2026 06:02:06 +0000 (0:00:01.957) 0:15:21.884 ****** 2026-02-17 06:02:19.557659 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:02:19.557671 | orchestrator | 2026-02-17 06:02:19.557685 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:02:19.557697 | orchestrator | Tuesday 17 February 2026 06:02:07 +0000 (0:00:01.150) 0:15:23.034 ****** 2026-02-17 06:02:19.557710 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.557723 | orchestrator | 2026-02-17 06:02:19.557736 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:02:19.557749 | orchestrator | Tuesday 17 February 2026 06:02:08 +0000 (0:00:01.129) 0:15:24.164 ****** 2026-02-17 06:02:19.557762 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.557774 | orchestrator | 2026-02-17 06:02:19.557787 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:02:19.557800 | orchestrator | Tuesday 17 February 2026 06:02:10 +0000 (0:00:01.238) 0:15:25.403 ****** 2026-02-17 06:02:19.557813 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.557825 | orchestrator | 2026-02-17 06:02:19.557838 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:02:19.557849 | orchestrator | Tuesday 17 February 2026 06:02:11 +0000 (0:00:01.132) 0:15:26.535 ****** 
2026-02-17 06:02:19.557860 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.557871 | orchestrator | 2026-02-17 06:02:19.557883 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:02:19.557903 | orchestrator | Tuesday 17 February 2026 06:02:12 +0000 (0:00:01.167) 0:15:27.703 ****** 2026-02-17 06:02:19.557915 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.557926 | orchestrator | 2026-02-17 06:02:19.557992 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:02:19.558006 | orchestrator | Tuesday 17 February 2026 06:02:13 +0000 (0:00:01.211) 0:15:28.914 ****** 2026-02-17 06:02:19.558075 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.558088 | orchestrator | 2026-02-17 06:02:19.558106 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:02:19.558118 | orchestrator | Tuesday 17 February 2026 06:02:14 +0000 (0:00:01.146) 0:15:30.061 ****** 2026-02-17 06:02:19.558129 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.558140 | orchestrator | 2026-02-17 06:02:19.558156 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:02:19.558175 | orchestrator | Tuesday 17 February 2026 06:02:15 +0000 (0:00:01.183) 0:15:31.245 ****** 2026-02-17 06:02:19.558193 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.558211 | orchestrator | 2026-02-17 06:02:19.558229 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:02:19.558247 | orchestrator | Tuesday 17 February 2026 06:02:17 +0000 (0:00:01.170) 0:15:32.415 ****** 2026-02-17 06:02:19.558266 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:02:19.558283 | orchestrator | 2026-02-17 06:02:19.558301 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-17 06:02:19.558320 | orchestrator | Tuesday 17 February 2026 06:02:18 +0000 (0:00:01.130) 0:15:33.545 ****** 2026-02-17 06:02:19.558340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:02:19.558356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:02:19.558387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:02:20.852697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-17 06:02:20.852821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:02:20.852875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:02:20.852894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:02:20.853009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f3163655', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-17 06:02:20.853061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:02:20.853083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:02:20.853095 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:02:20.853119 | orchestrator |
2026-02-17 06:02:20.853131 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-17 06:02:20.853143 | orchestrator | Tuesday 17 February 2026 06:02:19 +0000 (0:00:01.263) 0:15:34.809 ******
2026-02-17 06:02:20.853156 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:20.853176 |
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:20.853188 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:20.853201 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:20.853221 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:37.343536 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:37.343756 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:37.343793 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f3163655', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:37.343829 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:37.343843 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:02:37.343864 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:02:37.343877 | orchestrator |
2026-02-17 06:02:37.343890 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-17 06:02:37.343903 |
orchestrator | Tuesday 17 February 2026 06:02:20 +0000 (0:00:01.305) 0:15:36.115 ******
2026-02-17 06:02:37.343914 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:02:37.343926 | orchestrator |
2026-02-17 06:02:37.343938 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-17 06:02:37.343949 | orchestrator | Tuesday 17 February 2026 06:02:22 +0000 (0:00:01.506) 0:15:37.621 ******
2026-02-17 06:02:37.344007 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:02:37.344019 | orchestrator |
2026-02-17 06:02:37.344030 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:02:37.344041 | orchestrator | Tuesday 17 February 2026 06:02:23 +0000 (0:00:01.122) 0:15:38.744 ******
2026-02-17 06:02:37.344052 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:02:37.344063 | orchestrator |
2026-02-17 06:02:37.344075 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:02:37.344086 | orchestrator | Tuesday 17 February 2026 06:02:25 +0000 (0:00:01.588) 0:15:40.332 ******
2026-02-17 06:02:37.344097 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:02:37.344109 | orchestrator |
2026-02-17 06:02:37.344120 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:02:37.344132 | orchestrator | Tuesday 17 February 2026 06:02:26 +0000 (0:00:01.186) 0:15:41.519 ******
2026-02-17 06:02:37.344143 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:02:37.344154 | orchestrator |
2026-02-17 06:02:37.344165 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:02:37.344177 | orchestrator | Tuesday 17 February 2026 06:02:27 +0000 (0:00:01.272) 0:15:42.792 ******
2026-02-17 06:02:37.344188 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:02:37.344199 | orchestrator |
2026-02-17 06:02:37.344215 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:02:37.344227 | orchestrator | Tuesday 17 February 2026 06:02:28 +0000 (0:00:01.206) 0:15:43.998 ******
2026-02-17 06:02:37.344239 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 06:02:37.344250 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 06:02:37.344261 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:02:37.344272 | orchestrator |
2026-02-17 06:02:37.344284 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:02:37.344295 | orchestrator | Tuesday 17 February 2026 06:02:30 +0000 (0:00:02.041) 0:15:46.040 ******
2026-02-17 06:02:37.344306 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 06:02:37.344318 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 06:02:37.344329 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:02:37.344340 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:02:37.344351 | orchestrator |
2026-02-17 06:02:37.344362 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 06:02:37.344374 | orchestrator | Tuesday 17 February 2026 06:02:31 +0000 (0:00:01.174) 0:15:47.214 ******
2026-02-17 06:02:37.344385 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:02:37.344396 | orchestrator |
2026-02-17 06:02:37.344407 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 06:02:37.344418 | orchestrator | Tuesday 17 February 2026 06:02:33 +0000 (0:00:01.232) 0:15:48.447 ******
2026-02-17 06:02:37.344475 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:02:37.344497 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:02:37.344508 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:02:37.344519 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:02:37.344531 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:02:37.344542 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:02:37.344553 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:02:37.344565 | orchestrator |
2026-02-17 06:02:37.344576 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 06:02:37.344587 | orchestrator | Tuesday 17 February 2026 06:02:35 +0000 (0:00:01.870) 0:15:50.318 ******
2026-02-17 06:02:37.344598 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:02:37.344609 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:02:37.344621 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:02:37.344640 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:03:17.536876 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:03:17.536980 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:03:17.537022 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:03:17.537033 | orchestrator |
2026-02-17 06:03:17.537054 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-17 06:03:17.537065 | orchestrator | Tuesday 17 February 2026 06:02:37 +0000 (0:00:02.282) 0:15:52.601 ******
2026-02-17 06:03:17.537075 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537086 | orchestrator |
2026-02-17 06:03:17.537097 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-17 06:03:17.537107 | orchestrator | Tuesday 17 February 2026 06:02:38 +0000 (0:00:00.917) 0:15:53.519 ******
2026-02-17 06:03:17.537117 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537127 | orchestrator |
2026-02-17 06:03:17.537137 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-17 06:03:17.537147 | orchestrator | Tuesday 17 February 2026 06:02:39 +0000 (0:00:00.920) 0:15:54.439 ******
2026-02-17 06:03:17.537157 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537167 | orchestrator |
2026-02-17 06:03:17.537177 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-17 06:03:17.537186 | orchestrator | Tuesday 17 February 2026 06:02:39 +0000 (0:00:00.810) 0:15:55.250 ******
2026-02-17 06:03:17.537196 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537206 | orchestrator |
2026-02-17 06:03:17.537216 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-17 06:03:17.537226 | orchestrator | Tuesday 17 February 2026 06:02:40 +0000 (0:00:00.896) 0:15:56.147 ******
2026-02-17 06:03:17.537236 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537246 | orchestrator |
2026-02-17 06:03:17.537256 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-17 06:03:17.537267 | orchestrator | Tuesday 17 February 2026 06:02:41 +0000 (0:00:00.795) 0:15:56.942 ******
2026-02-17 06:03:17.537276 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 06:03:17.537287 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 06:03:17.537297 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:03:17.537306 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537316 | orchestrator |
2026-02-17 06:03:17.537326 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-17 06:03:17.537359 | orchestrator | Tuesday 17 February 2026 06:02:42 +0000 (0:00:01.078) 0:15:58.021 ******
2026-02-17 06:03:17.537369 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-17 06:03:17.537414 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-17 06:03:17.537427 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-17 06:03:17.537438 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-17 06:03:17.537449 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-17 06:03:17.537461 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-17 06:03:17.537472 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537483 | orchestrator |
2026-02-17 06:03:17.537494 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-17 06:03:17.537506 | orchestrator | Tuesday 17 February 2026 06:02:44 +0000 (0:00:01.761) 0:15:59.783 ******
2026-02-17 06:03:17.537518 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:03:17.537529 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:03:17.537540 | orchestrator |
2026-02-17 06:03:17.537552 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-17 06:03:17.537563 | orchestrator | Tuesday 17 February 2026 06:02:47 +0000 (0:00:03.222) 0:16:03.005 ******
2026-02-17 06:03:17.537574 | orchestrator | changed: [testbed-node-2]
2026-02-17 06:03:17.537586 | orchestrator |
2026-02-17 06:03:17.537596 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:03:17.537608 | orchestrator | Tuesday 17 February 2026 06:02:49 +0000 (0:00:02.115) 0:16:05.121 ******
2026-02-17 06:03:17.537619 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-02-17 06:03:17.537631 | orchestrator |
2026-02-17 06:03:17.537642 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 06:03:17.537653 | orchestrator | Tuesday 17 February 2026 06:02:51 +0000 (0:00:01.262) 0:16:06.383 ******
2026-02-17 06:03:17.537664 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-02-17 06:03:17.537676 | orchestrator |
2026-02-17 06:03:17.537688 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 06:03:17.537699 | orchestrator | Tuesday 17 February 2026 06:02:52 +0000 (0:00:01.193) 0:16:07.577 ******
2026-02-17 06:03:17.537710 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:03:17.537721 | orchestrator |
2026-02-17 06:03:17.537732 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 06:03:17.537744 | orchestrator | Tuesday 17 February 2026 06:02:53 +0000 (0:00:01.588) 0:16:09.165 ******
2026-02-17 06:03:17.537755 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537766 | orchestrator |
2026-02-17 06:03:17.537777 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 06:03:17.537789 | orchestrator | Tuesday 17 February 2026 06:02:55 +0000 (0:00:01.147) 0:16:10.312 ******
2026-02-17 06:03:17.537800 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537810 | orchestrator |
2026-02-17 06:03:17.537820 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 06:03:17.537845 | orchestrator | Tuesday 17 February 2026 06:02:56 +0000 (0:00:01.208) 0:16:11.521 ******
2026-02-17 06:03:17.537856 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537866 | orchestrator |
2026-02-17 06:03:17.537875 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 06:03:17.537885 | orchestrator | Tuesday 17 February 2026 06:02:57 +0000 (0:00:01.124) 0:16:12.646 ******
2026-02-17 06:03:17.537895 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:03:17.537905 | orchestrator |
2026-02-17 06:03:17.537915 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 06:03:17.537932 | orchestrator | Tuesday 17 February 2026 06:02:59 +0000 (0:00:02.505) 0:16:15.152 ******
2026-02-17 06:03:17.537942 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537952 | orchestrator |
2026-02-17 06:03:17.537962 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 06:03:17.537971 | orchestrator | Tuesday 17 February 2026 06:03:01 +0000 (0:00:01.137) 0:16:16.289 ******
2026-02-17 06:03:17.537981 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.537990 | orchestrator |
2026-02-17 06:03:17.538103 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 06:03:17.538118 | orchestrator | Tuesday 17 February 2026 06:03:02 +0000 (0:00:01.222) 0:16:17.511 ******
2026-02-17 06:03:17.538128 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:03:17.538137 | orchestrator |
2026-02-17 06:03:17.538147 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 06:03:17.538157 | orchestrator | Tuesday 17 February 2026 06:03:03 +0000 (0:00:01.577) 0:16:19.089 ******
2026-02-17 06:03:17.538166 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:03:17.538176 | orchestrator |
2026-02-17 06:03:17.538186 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 06:03:17.538195 | orchestrator | Tuesday 17 February 2026 06:03:05 +0000 (0:00:01.586) 0:16:20.676 ******
2026-02-17 06:03:17.538205 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538214 | orchestrator |
2026-02-17 06:03:17.538224 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:03:17.538234 | orchestrator | Tuesday 17 February 2026 06:03:06 +0000 (0:00:00.783) 0:16:21.459 ******
2026-02-17 06:03:17.538244 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:03:17.538253 | orchestrator |
2026-02-17 06:03:17.538263 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:03:17.538273 | orchestrator | Tuesday 17 February 2026 06:03:07 +0000 (0:00:00.813) 0:16:22.273 ******
2026-02-17 06:03:17.538282 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538292 | orchestrator |
2026-02-17 06:03:17.538302 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:03:17.538311 | orchestrator | Tuesday 17 February 2026 06:03:07 +0000 (0:00:00.769) 0:16:23.043 ******
2026-02-17 06:03:17.538321 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538331 | orchestrator |
2026-02-17 06:03:17.538347 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:03:17.538358 | orchestrator | Tuesday 17 February 2026 06:03:08 +0000 (0:00:00.856) 0:16:23.899 ******
2026-02-17 06:03:17.538367 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538377 | orchestrator |
2026-02-17 06:03:17.538387 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:03:17.538397 | orchestrator | Tuesday 17 February 2026 06:03:09 +0000 (0:00:00.787) 0:16:24.687 ******
2026-02-17 06:03:17.538406 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538416 | orchestrator |
2026-02-17 06:03:17.538426 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:03:17.538436 | orchestrator | Tuesday 17 February 2026 06:03:10 +0000 (0:00:00.813) 0:16:25.500 ******
2026-02-17 06:03:17.538445 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538455 | orchestrator |
2026-02-17 06:03:17.538465 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:03:17.538474 | orchestrator | Tuesday 17 February 2026 06:03:11 +0000 (0:00:00.846) 0:16:26.347 ******
2026-02-17 06:03:17.538484 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:03:17.538494 | orchestrator |
2026-02-17 06:03:17.538504 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:03:17.538513 | orchestrator | Tuesday 17 February 2026 06:03:11 +0000 (0:00:00.826) 0:16:27.174 ******
2026-02-17 06:03:17.538523 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:03:17.538533 | orchestrator |
2026-02-17 06:03:17.538542 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:03:17.538559 | orchestrator | Tuesday 17 February 2026 06:03:12 +0000 (0:00:00.863) 0:16:28.037 ******
2026-02-17 06:03:17.538569 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:03:17.538579 | orchestrator |
2026-02-17 06:03:17.538589 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:03:17.538598 | orchestrator | Tuesday 17 February 2026 06:03:13 +0000 (0:00:00.850) 0:16:28.888 ******
2026-02-17 06:03:17.538608 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538618 | orchestrator |
2026-02-17 06:03:17.538628 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:03:17.538637 | orchestrator | Tuesday 17 February 2026 06:03:14 +0000 (0:00:00.778) 0:16:29.667 ******
2026-02-17 06:03:17.538647 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538657 | orchestrator |
2026-02-17 06:03:17.538666 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:03:17.538676 | orchestrator | Tuesday 17 February 2026 06:03:15 +0000 (0:00:00.782) 0:16:30.450 ******
2026-02-17 06:03:17.538686 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538695 | orchestrator |
2026-02-17 06:03:17.538705 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:03:17.538715 | orchestrator | Tuesday 17 February 2026 06:03:15 +0000 (0:00:00.761) 0:16:31.211 ******
2026-02-17 06:03:17.538725 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538734 | orchestrator |
2026-02-17 06:03:17.538744 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:03:17.538754 | orchestrator | Tuesday 17 February 2026 06:03:16 +0000 (0:00:00.806) 0:16:32.018 ******
2026-02-17 06:03:17.538764 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:03:17.538773 | orchestrator |
2026-02-17 06:03:17.538790 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:04:01.672664 | orchestrator | Tuesday 17 February 2026 06:03:17 +0000 (0:00:00.776) 0:16:32.795 ******
2026-02-17 06:04:01.672770 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.672785 | orchestrator |
2026-02-17 06:04:01.672797 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:04:01.672807 |
orchestrator | Tuesday 17 February 2026 06:03:18 +0000 (0:00:00.798) 0:16:33.593 ******
2026-02-17 06:04:01.672818 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.672828 | orchestrator |
2026-02-17 06:04:01.672839 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:04:01.672850 | orchestrator | Tuesday 17 February 2026 06:03:19 +0000 (0:00:00.798) 0:16:34.392 ******
2026-02-17 06:04:01.672860 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.672870 | orchestrator |
2026-02-17 06:04:01.672880 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:04:01.672890 | orchestrator | Tuesday 17 February 2026 06:03:19 +0000 (0:00:00.779) 0:16:35.172 ******
2026-02-17 06:04:01.672899 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.672909 | orchestrator |
2026-02-17 06:04:01.672919 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:04:01.672929 | orchestrator | Tuesday 17 February 2026 06:03:20 +0000 (0:00:00.789) 0:16:35.962 ******
2026-02-17 06:04:01.672939 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.672949 | orchestrator |
2026-02-17 06:04:01.672959 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:04:01.672969 | orchestrator | Tuesday 17 February 2026 06:03:21 +0000 (0:00:00.802) 0:16:36.765 ******
2026-02-17 06:04:01.672978 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.672988 | orchestrator |
2026-02-17 06:04:01.672998 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:04:01.673008 | orchestrator | Tuesday 17 February 2026 06:03:22 +0000 (0:00:00.762) 0:16:37.527 ******
2026-02-17 06:04:01.673018 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.673028 | orchestrator |
2026-02-17 06:04:01.673038 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:04:01.673098 | orchestrator | Tuesday 17 February 2026 06:03:23 +0000 (0:00:00.776) 0:16:38.304 ******
2026-02-17 06:04:01.673109 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:04:01.673120 | orchestrator |
2026-02-17 06:04:01.673130 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:04:01.673140 | orchestrator | Tuesday 17 February 2026 06:03:24 +0000 (0:00:01.600) 0:16:39.905 ******
2026-02-17 06:04:01.673150 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:04:01.673160 | orchestrator |
2026-02-17 06:04:01.673170 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:04:01.673194 | orchestrator | Tuesday 17 February 2026 06:03:26 +0000 (0:00:02.069) 0:16:41.975 ******
2026-02-17 06:04:01.673204 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-17 06:04:01.673218 | orchestrator |
2026-02-17 06:04:01.673230 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:04:01.673242 | orchestrator | Tuesday 17 February 2026 06:03:27 +0000 (0:00:01.255) 0:16:43.230 ******
2026-02-17 06:04:01.673253 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.673265 | orchestrator |
2026-02-17 06:04:01.673276 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:04:01.673286 | orchestrator | Tuesday 17 February 2026 06:03:29 +0000 (0:00:01.144) 0:16:44.374 ******
2026-02-17 06:04:01.673296 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.673306 | orchestrator |
2026-02-17 06:04:01.673316 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:04:01.673325 | orchestrator | Tuesday 17 February 2026 06:03:30 +0000 (0:00:01.144) 0:16:45.519 ******
2026-02-17 06:04:01.673335 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:04:01.673345 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:04:01.673355 | orchestrator |
2026-02-17 06:04:01.673365 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:04:01.673375 | orchestrator | Tuesday 17 February 2026 06:03:32 +0000 (0:00:01.824) 0:16:47.344 ******
2026-02-17 06:04:01.673384 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:04:01.673394 | orchestrator |
2026-02-17 06:04:01.673404 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:04:01.673414 | orchestrator | Tuesday 17 February 2026 06:03:33 +0000 (0:00:01.479) 0:16:48.823 ******
2026-02-17 06:04:01.673424 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.673434 | orchestrator |
2026-02-17 06:04:01.673443 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:04:01.673453 | orchestrator | Tuesday 17 February 2026 06:03:34 +0000 (0:00:01.160) 0:16:49.984 ******
2026-02-17 06:04:01.673463 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.673473 | orchestrator |
2026-02-17 06:04:01.673483 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:04:01.673493 | orchestrator | Tuesday 17 February 2026 06:03:35 +0000 (0:00:00.855) 0:16:50.840 ******
2026-02-17 06:04:01.673503 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:04:01.673513 | orchestrator |
2026-02-17 06:04:01.673523 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:04:01.673533 | orchestrator | Tuesday 17
February 2026 06:03:36 +0000 (0:00:00.781) 0:16:51.621 ****** 2026-02-17 06:04:01.673542 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-02-17 06:04:01.673552 | orchestrator | 2026-02-17 06:04:01.673562 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-17 06:04:01.673572 | orchestrator | Tuesday 17 February 2026 06:03:37 +0000 (0:00:01.120) 0:16:52.742 ****** 2026-02-17 06:04:01.673582 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:04:01.673591 | orchestrator | 2026-02-17 06:04:01.673601 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-17 06:04:01.673634 | orchestrator | Tuesday 17 February 2026 06:03:39 +0000 (0:00:01.765) 0:16:54.508 ****** 2026-02-17 06:04:01.673645 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-17 06:04:01.673654 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-17 06:04:01.673664 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-17 06:04:01.673674 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.673684 | orchestrator | 2026-02-17 06:04:01.673693 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-17 06:04:01.673703 | orchestrator | Tuesday 17 February 2026 06:03:40 +0000 (0:00:01.168) 0:16:55.676 ****** 2026-02-17 06:04:01.673713 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.673723 | orchestrator | 2026-02-17 06:04:01.673733 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-17 06:04:01.673742 | orchestrator | Tuesday 17 February 2026 06:03:41 +0000 (0:00:01.108) 0:16:56.785 ****** 2026-02-17 06:04:01.673752 | orchestrator | skipping: [testbed-node-2] 2026-02-17 
06:04:01.673762 | orchestrator | 2026-02-17 06:04:01.673772 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-17 06:04:01.673782 | orchestrator | Tuesday 17 February 2026 06:03:42 +0000 (0:00:01.268) 0:16:58.053 ****** 2026-02-17 06:04:01.673792 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.673801 | orchestrator | 2026-02-17 06:04:01.673811 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-17 06:04:01.673821 | orchestrator | Tuesday 17 February 2026 06:03:43 +0000 (0:00:01.175) 0:16:59.229 ****** 2026-02-17 06:04:01.673831 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.673841 | orchestrator | 2026-02-17 06:04:01.673851 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-17 06:04:01.673861 | orchestrator | Tuesday 17 February 2026 06:03:45 +0000 (0:00:01.207) 0:17:00.436 ****** 2026-02-17 06:04:01.673870 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.673880 | orchestrator | 2026-02-17 06:04:01.673890 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:04:01.673900 | orchestrator | Tuesday 17 February 2026 06:03:46 +0000 (0:00:00.849) 0:17:01.285 ****** 2026-02-17 06:04:01.673910 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:04:01.673920 | orchestrator | 2026-02-17 06:04:01.673929 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:04:01.673939 | orchestrator | Tuesday 17 February 2026 06:03:48 +0000 (0:00:02.236) 0:17:03.521 ****** 2026-02-17 06:04:01.673949 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:04:01.673959 | orchestrator | 2026-02-17 06:04:01.673969 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-17 06:04:01.673984 | orchestrator | Tuesday 17 February 
2026 06:03:49 +0000 (0:00:00.870) 0:17:04.392 ****** 2026-02-17 06:04:01.673994 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-17 06:04:01.674004 | orchestrator | 2026-02-17 06:04:01.674081 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-17 06:04:01.674096 | orchestrator | Tuesday 17 February 2026 06:03:50 +0000 (0:00:01.146) 0:17:05.538 ****** 2026-02-17 06:04:01.674106 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.674116 | orchestrator | 2026-02-17 06:04:01.674126 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-17 06:04:01.674135 | orchestrator | Tuesday 17 February 2026 06:03:51 +0000 (0:00:01.127) 0:17:06.666 ****** 2026-02-17 06:04:01.674145 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.674155 | orchestrator | 2026-02-17 06:04:01.674165 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-17 06:04:01.674175 | orchestrator | Tuesday 17 February 2026 06:03:52 +0000 (0:00:01.214) 0:17:07.880 ****** 2026-02-17 06:04:01.674185 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.674194 | orchestrator | 2026-02-17 06:04:01.674211 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-17 06:04:01.674221 | orchestrator | Tuesday 17 February 2026 06:03:53 +0000 (0:00:01.191) 0:17:09.072 ****** 2026-02-17 06:04:01.674231 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.674241 | orchestrator | 2026-02-17 06:04:01.674251 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-17 06:04:01.674260 | orchestrator | Tuesday 17 February 2026 06:03:54 +0000 (0:00:01.164) 0:17:10.237 ****** 2026-02-17 06:04:01.674270 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.674280 | 
orchestrator | 2026-02-17 06:04:01.674290 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-17 06:04:01.674300 | orchestrator | Tuesday 17 February 2026 06:03:56 +0000 (0:00:01.221) 0:17:11.458 ****** 2026-02-17 06:04:01.674309 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.674319 | orchestrator | 2026-02-17 06:04:01.674329 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-17 06:04:01.674339 | orchestrator | Tuesday 17 February 2026 06:03:57 +0000 (0:00:01.177) 0:17:12.636 ****** 2026-02-17 06:04:01.674348 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.674358 | orchestrator | 2026-02-17 06:04:01.674368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-17 06:04:01.674378 | orchestrator | Tuesday 17 February 2026 06:03:58 +0000 (0:00:01.163) 0:17:13.799 ****** 2026-02-17 06:04:01.674388 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:01.674398 | orchestrator | 2026-02-17 06:04:01.674408 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-17 06:04:01.674417 | orchestrator | Tuesday 17 February 2026 06:03:59 +0000 (0:00:01.186) 0:17:14.985 ****** 2026-02-17 06:04:01.674427 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:04:01.674437 | orchestrator | 2026-02-17 06:04:01.674447 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-17 06:04:01.674457 | orchestrator | Tuesday 17 February 2026 06:04:00 +0000 (0:00:00.813) 0:17:15.799 ****** 2026-02-17 06:04:01.674467 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-17 06:04:01.674477 | orchestrator | 2026-02-17 06:04:01.674493 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-17 
06:04:38.137029 | orchestrator | Tuesday 17 February 2026 06:04:01 +0000 (0:00:01.131) 0:17:16.931 ****** 2026-02-17 06:04:38.137194 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-17 06:04:38.137215 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-17 06:04:38.137227 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-17 06:04:38.137243 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-17 06:04:38.137263 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-17 06:04:38.137281 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-17 06:04:38.137299 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-17 06:04:38.137318 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-17 06:04:38.137338 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-17 06:04:38.137358 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-17 06:04:38.137380 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-17 06:04:38.137400 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-17 06:04:38.137413 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 06:04:38.137424 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 06:04:38.137435 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-17 06:04:38.137447 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-17 06:04:38.137458 | orchestrator | 2026-02-17 06:04:38.137470 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 06:04:38.137507 | orchestrator | Tuesday 17 February 2026 06:04:08 +0000 (0:00:06.355) 0:17:23.287 ****** 2026-02-17 06:04:38.137519 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 06:04:38.137533 | orchestrator | 2026-02-17 06:04:38.137546 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-17 06:04:38.137559 | orchestrator | Tuesday 17 February 2026 06:04:08 +0000 (0:00:00.791) 0:17:24.078 ****** 2026-02-17 06:04:38.137571 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.137584 | orchestrator | 2026-02-17 06:04:38.137597 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-17 06:04:38.137610 | orchestrator | Tuesday 17 February 2026 06:04:09 +0000 (0:00:00.762) 0:17:24.841 ****** 2026-02-17 06:04:38.137625 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.137644 | orchestrator | 2026-02-17 06:04:38.137666 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-17 06:04:38.137703 | orchestrator | Tuesday 17 February 2026 06:04:10 +0000 (0:00:00.878) 0:17:25.719 ****** 2026-02-17 06:04:38.137723 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.137743 | orchestrator | 2026-02-17 06:04:38.137763 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-17 06:04:38.137783 | orchestrator | Tuesday 17 February 2026 06:04:11 +0000 (0:00:00.812) 0:17:26.532 ****** 2026-02-17 06:04:38.137802 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.137815 | orchestrator | 2026-02-17 06:04:38.137859 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-17 06:04:38.137872 | orchestrator | Tuesday 17 February 2026 06:04:12 +0000 (0:00:00.810) 0:17:27.342 ****** 2026-02-17 06:04:38.137885 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.137899 | orchestrator | 2026-02-17 06:04:38.137910 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-02-17 06:04:38.137923 | orchestrator | Tuesday 17 February 2026 06:04:12 +0000 (0:00:00.793) 0:17:28.136 ****** 2026-02-17 06:04:38.137934 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.137945 | orchestrator | 2026-02-17 06:04:38.137956 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-17 06:04:38.137968 | orchestrator | Tuesday 17 February 2026 06:04:13 +0000 (0:00:00.806) 0:17:28.943 ****** 2026-02-17 06:04:38.137979 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.137990 | orchestrator | 2026-02-17 06:04:38.138001 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-17 06:04:38.138110 | orchestrator | Tuesday 17 February 2026 06:04:14 +0000 (0:00:00.798) 0:17:29.742 ****** 2026-02-17 06:04:38.138139 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138158 | orchestrator | 2026-02-17 06:04:38.138177 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-17 06:04:38.138188 | orchestrator | Tuesday 17 February 2026 06:04:15 +0000 (0:00:00.791) 0:17:30.533 ****** 2026-02-17 06:04:38.138199 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138210 | orchestrator | 2026-02-17 06:04:38.138221 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-17 06:04:38.138233 | orchestrator | Tuesday 17 February 2026 06:04:16 +0000 (0:00:00.766) 0:17:31.300 ****** 2026-02-17 06:04:38.138244 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138255 | orchestrator | 2026-02-17 06:04:38.138266 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-17 06:04:38.138277 | orchestrator | Tuesday 17 February 2026 06:04:16 +0000 (0:00:00.783) 0:17:32.084 ****** 2026-02-17 
06:04:38.138288 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138299 | orchestrator | 2026-02-17 06:04:38.138310 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-17 06:04:38.138321 | orchestrator | Tuesday 17 February 2026 06:04:17 +0000 (0:00:00.784) 0:17:32.869 ****** 2026-02-17 06:04:38.138332 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138356 | orchestrator | 2026-02-17 06:04:38.138367 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-17 06:04:38.138384 | orchestrator | Tuesday 17 February 2026 06:04:18 +0000 (0:00:00.903) 0:17:33.772 ****** 2026-02-17 06:04:38.138402 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138420 | orchestrator | 2026-02-17 06:04:38.138439 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-17 06:04:38.138483 | orchestrator | Tuesday 17 February 2026 06:04:19 +0000 (0:00:00.767) 0:17:34.539 ****** 2026-02-17 06:04:38.138503 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138595 | orchestrator | 2026-02-17 06:04:38.138610 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-17 06:04:38.138621 | orchestrator | Tuesday 17 February 2026 06:04:20 +0000 (0:00:00.862) 0:17:35.402 ****** 2026-02-17 06:04:38.138650 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138661 | orchestrator | 2026-02-17 06:04:38.138672 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-17 06:04:38.138683 | orchestrator | Tuesday 17 February 2026 06:04:20 +0000 (0:00:00.846) 0:17:36.249 ****** 2026-02-17 06:04:38.138695 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138706 | orchestrator | 2026-02-17 06:04:38.138717 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:04:38.138730 | orchestrator | Tuesday 17 February 2026 06:04:21 +0000 (0:00:00.785) 0:17:37.035 ****** 2026-02-17 06:04:38.138741 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138760 | orchestrator | 2026-02-17 06:04:38.138781 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:04:38.138799 | orchestrator | Tuesday 17 February 2026 06:04:22 +0000 (0:00:00.796) 0:17:37.831 ****** 2026-02-17 06:04:38.138820 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138839 | orchestrator | 2026-02-17 06:04:38.138881 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:04:38.138900 | orchestrator | Tuesday 17 February 2026 06:04:23 +0000 (0:00:00.807) 0:17:38.639 ****** 2026-02-17 06:04:38.138912 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138922 | orchestrator | 2026-02-17 06:04:38.138933 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:04:38.138944 | orchestrator | Tuesday 17 February 2026 06:04:24 +0000 (0:00:00.846) 0:17:39.485 ****** 2026-02-17 06:04:38.138955 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.138966 | orchestrator | 2026-02-17 06:04:38.138977 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:04:38.138988 | orchestrator | Tuesday 17 February 2026 06:04:25 +0000 (0:00:00.847) 0:17:40.333 ****** 2026-02-17 06:04:38.138999 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:04:38.139010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:04:38.139021 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:04:38.139032 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 06:04:38.139043 | orchestrator | 2026-02-17 06:04:38.139063 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:04:38.139075 | orchestrator | Tuesday 17 February 2026 06:04:26 +0000 (0:00:01.072) 0:17:41.406 ****** 2026-02-17 06:04:38.139111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:04:38.139128 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:04:38.139146 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:04:38.139164 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.139182 | orchestrator | 2026-02-17 06:04:38.139199 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:04:38.139217 | orchestrator | Tuesday 17 February 2026 06:04:27 +0000 (0:00:01.139) 0:17:42.545 ****** 2026-02-17 06:04:38.139234 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:04:38.139265 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:04:38.139283 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:04:38.139301 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.139320 | orchestrator | 2026-02-17 06:04:38.139338 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:04:38.139357 | orchestrator | Tuesday 17 February 2026 06:04:28 +0000 (0:00:01.037) 0:17:43.583 ****** 2026-02-17 06:04:38.139375 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.139390 | orchestrator | 2026-02-17 06:04:38.139401 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:04:38.139412 | orchestrator | Tuesday 17 February 2026 06:04:29 +0000 (0:00:00.806) 0:17:44.389 ****** 2026-02-17 06:04:38.139424 | orchestrator | skipping: 
[testbed-node-2] => (item=0)  2026-02-17 06:04:38.139435 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.139446 | orchestrator | 2026-02-17 06:04:38.139456 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-17 06:04:38.139467 | orchestrator | Tuesday 17 February 2026 06:04:30 +0000 (0:00:00.897) 0:17:45.287 ****** 2026-02-17 06:04:38.139478 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:04:38.139490 | orchestrator | 2026-02-17 06:04:38.139501 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:04:38.139512 | orchestrator | Tuesday 17 February 2026 06:04:31 +0000 (0:00:01.435) 0:17:46.722 ****** 2026-02-17 06:04:38.139523 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:04:38.139534 | orchestrator | 2026-02-17 06:04:38.139545 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-17 06:04:38.139556 | orchestrator | Tuesday 17 February 2026 06:04:32 +0000 (0:00:00.814) 0:17:47.537 ****** 2026-02-17 06:04:38.139567 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-02-17 06:04:38.139579 | orchestrator | 2026-02-17 06:04:38.139590 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-17 06:04:38.139601 | orchestrator | Tuesday 17 February 2026 06:04:33 +0000 (0:00:01.191) 0:17:48.729 ****** 2026-02-17 06:04:38.139619 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:04:38.139638 | orchestrator | 2026-02-17 06:04:38.139656 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-17 06:04:38.139674 | orchestrator | Tuesday 17 February 2026 06:04:36 +0000 (0:00:03.449) 0:17:52.179 ****** 2026-02-17 06:04:38.139693 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:04:38.139714 | orchestrator | 2026-02-17 06:04:38.139748 | 
orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-17 06:05:53.601053 | orchestrator | Tuesday 17 February 2026 06:04:38 +0000 (0:00:01.211) 0:17:53.390 ****** 2026-02-17 06:05:53.601210 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:05:53.601228 | orchestrator | 2026-02-17 06:05:53.601240 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-17 06:05:53.601252 | orchestrator | Tuesday 17 February 2026 06:04:39 +0000 (0:00:01.133) 0:17:54.524 ****** 2026-02-17 06:05:53.601264 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:05:53.601275 | orchestrator | 2026-02-17 06:05:53.601286 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-17 06:05:53.601297 | orchestrator | Tuesday 17 February 2026 06:04:40 +0000 (0:00:01.215) 0:17:55.740 ****** 2026-02-17 06:05:53.601309 | orchestrator | changed: [testbed-node-2] 2026-02-17 06:05:53.601320 | orchestrator | 2026-02-17 06:05:53.601332 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-17 06:05:53.601343 | orchestrator | Tuesday 17 February 2026 06:04:42 +0000 (0:00:02.002) 0:17:57.742 ****** 2026-02-17 06:05:53.601354 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:05:53.601365 | orchestrator | 2026-02-17 06:05:53.601376 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-17 06:05:53.601387 | orchestrator | Tuesday 17 February 2026 06:04:44 +0000 (0:00:01.627) 0:17:59.370 ****** 2026-02-17 06:05:53.601424 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:05:53.601436 | orchestrator | 2026-02-17 06:05:53.601447 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-17 06:05:53.601458 | orchestrator | Tuesday 17 February 2026 06:04:45 +0000 (0:00:01.599) 0:18:00.969 ****** 2026-02-17 06:05:53.601469 
| orchestrator | ok: [testbed-node-2] 2026-02-17 06:05:53.601479 | orchestrator | 2026-02-17 06:05:53.601490 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-17 06:05:53.601501 | orchestrator | Tuesday 17 February 2026 06:04:47 +0000 (0:00:01.441) 0:18:02.410 ****** 2026-02-17 06:05:53.601512 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:05:53.601523 | orchestrator | 2026-02-17 06:05:53.601534 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-17 06:05:53.601545 | orchestrator | Tuesday 17 February 2026 06:04:48 +0000 (0:00:01.544) 0:18:03.955 ****** 2026-02-17 06:05:53.601556 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:05:53.601567 | orchestrator | 2026-02-17 06:05:53.601578 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-17 06:05:53.601592 | orchestrator | Tuesday 17 February 2026 06:04:50 +0000 (0:00:01.548) 0:18:05.504 ****** 2026-02-17 06:05:53.601605 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:05:53.601634 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-17 06:05:53.601647 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-17 06:05:53.601659 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-17 06:05:53.601673 | orchestrator | 2026-02-17 06:05:53.601685 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-17 06:05:53.601697 | orchestrator | Tuesday 17 February 2026 06:04:54 +0000 (0:00:03.864) 0:18:09.369 ****** 2026-02-17 06:05:53.601710 | orchestrator | changed: [testbed-node-2] 2026-02-17 06:05:53.601723 | orchestrator | 2026-02-17 06:05:53.601735 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
**************************
2026-02-17 06:05:53.601748 | orchestrator | Tuesday 17 February 2026 06:04:56 +0000 (0:00:02.023) 0:18:11.392 ******
2026-02-17 06:05:53.601761 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.601774 | orchestrator |
2026-02-17 06:05:53.601787 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-17 06:05:53.601799 | orchestrator | Tuesday 17 February 2026 06:04:57 +0000 (0:00:01.172) 0:18:12.565 ******
2026-02-17 06:05:53.601812 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.601824 | orchestrator |
2026-02-17 06:05:53.601837 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-17 06:05:53.601850 | orchestrator | Tuesday 17 February 2026 06:04:58 +0000 (0:00:01.194) 0:18:13.759 ******
2026-02-17 06:05:53.601861 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.601874 | orchestrator |
2026-02-17 06:05:53.601887 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-17 06:05:53.601899 | orchestrator | Tuesday 17 February 2026 06:05:00 +0000 (0:00:01.840) 0:18:15.600 ******
2026-02-17 06:05:53.601912 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.601925 | orchestrator |
2026-02-17 06:05:53.601938 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-17 06:05:53.601950 | orchestrator | Tuesday 17 February 2026 06:05:01 +0000 (0:00:01.529) 0:18:17.130 ******
2026-02-17 06:05:53.601961 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:05:53.601972 | orchestrator |
2026-02-17 06:05:53.601983 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-17 06:05:53.601993 | orchestrator | Tuesday 17 February 2026 06:05:02 +0000 (0:00:00.776) 0:18:17.906 ******
2026-02-17 06:05:53.602005 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2
2026-02-17 06:05:53.602070 | orchestrator |
2026-02-17 06:05:53.602083 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-17 06:05:53.602104 | orchestrator | Tuesday 17 February 2026 06:05:03 +0000 (0:00:01.218) 0:18:19.125 ******
2026-02-17 06:05:53.602115 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:05:53.602126 | orchestrator |
2026-02-17 06:05:53.602137 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-17 06:05:53.602148 | orchestrator | Tuesday 17 February 2026 06:05:04 +0000 (0:00:01.125) 0:18:20.250 ******
2026-02-17 06:05:53.602179 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:05:53.602190 | orchestrator |
2026-02-17 06:05:53.602201 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-17 06:05:53.602212 | orchestrator | Tuesday 17 February 2026 06:05:06 +0000 (0:00:01.119) 0:18:21.370 ******
2026-02-17 06:05:53.602223 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-02-17 06:05:53.602234 | orchestrator |
2026-02-17 06:05:53.602263 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-17 06:05:53.602275 | orchestrator | Tuesday 17 February 2026 06:05:07 +0000 (0:00:01.161) 0:18:22.532 ******
2026-02-17 06:05:53.602286 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.602297 | orchestrator |
2026-02-17 06:05:53.602308 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-17 06:05:53.602319 | orchestrator | Tuesday 17 February 2026 06:05:09 +0000 (0:00:02.252) 0:18:24.784 ******
2026-02-17 06:05:53.602330 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.602341 | orchestrator |
2026-02-17 06:05:53.602352 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-17 06:05:53.602363 | orchestrator | Tuesday 17 February 2026 06:05:11 +0000 (0:00:01.945) 0:18:26.730 ******
2026-02-17 06:05:53.602374 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.602385 | orchestrator |
2026-02-17 06:05:53.602396 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-17 06:05:53.602407 | orchestrator | Tuesday 17 February 2026 06:05:13 +0000 (0:00:02.412) 0:18:29.142 ******
2026-02-17 06:05:53.602418 | orchestrator | changed: [testbed-node-2]
2026-02-17 06:05:53.602429 | orchestrator |
2026-02-17 06:05:53.602440 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-17 06:05:53.602451 | orchestrator | Tuesday 17 February 2026 06:05:16 +0000 (0:00:02.909) 0:18:32.051 ******
2026-02-17 06:05:53.602462 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2
2026-02-17 06:05:53.602472 | orchestrator |
2026-02-17 06:05:53.602483 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-17 06:05:53.602494 | orchestrator | Tuesday 17 February 2026 06:05:17 +0000 (0:00:01.134) 0:18:33.186 ******
2026-02-17 06:05:53.602505 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-17 06:05:53.602517 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.602528 | orchestrator |
2026-02-17 06:05:53.602539 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-17 06:05:53.602550 | orchestrator | Tuesday 17 February 2026 06:05:40 +0000 (0:00:22.996) 0:18:56.182 ******
2026-02-17 06:05:53.602561 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:05:53.602572 | orchestrator |
2026-02-17 06:05:53.602583 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-17 06:05:53.602594 | orchestrator | Tuesday 17 February 2026 06:05:43 +0000 (0:00:02.615) 0:18:58.797 ******
2026-02-17 06:05:53.602605 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:05:53.602615 | orchestrator |
2026-02-17 06:05:53.602627 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-17 06:05:53.602644 | orchestrator | Tuesday 17 February 2026 06:05:44 +0000 (0:00:00.780) 0:18:59.578 ******
2026-02-17 06:05:53.602658 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-17 06:05:53.602679 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-17 06:05:53.602690 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-17 06:05:53.602702 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-17 06:05:53.602715 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-17 06:05:53.602734 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__94d008519633750d833c4c909a3951e373d3e97e'}])
2026-02-17 06:06:40.639393 | orchestrator |
2026-02-17 06:06:40.639520 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-02-17 06:06:40.639538 | orchestrator | Tuesday 17 February 2026 06:05:53 +0000 (0:00:09.277) 0:19:08.856 ******
2026-02-17 06:06:40.639551 | orchestrator | changed: [testbed-node-2]
2026-02-17 06:06:40.639563 | orchestrator |
2026-02-17 06:06:40.639574 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:06:40.639586 | orchestrator | Tuesday 17 February 2026 06:05:55 +0000 (0:00:02.151) 0:19:11.007 ******
2026-02-17 06:06:40.639597 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:06:40.639609 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-02-17 06:06:40.639620 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-02-17 06:06:40.639631 | orchestrator |
2026-02-17 06:06:40.639641 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:06:40.639658 | orchestrator | Tuesday 17 February 2026 06:05:57 +0000 (0:00:01.874) 0:19:12.882 ******
2026-02-17 06:06:40.639675 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 06:06:40.639687 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 06:06:40.639699 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:06:40.639709 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:06:40.639720 | orchestrator |
2026-02-17 06:06:40.639731 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-02-17 06:06:40.639742 | orchestrator | Tuesday 17 February 2026 06:05:58 +0000 (0:00:01.071) 0:19:13.953 ******
2026-02-17 06:06:40.639753 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:06:40.639765 | orchestrator |
2026-02-17 06:06:40.639776 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-02-17 06:06:40.639811 | orchestrator | Tuesday 17 February 2026 06:05:59 +0000 (0:00:00.816) 0:19:14.770 ******
2026-02-17 06:06:40.639823 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:06:40.639835 | orchestrator |
2026-02-17 06:06:40.639846 | orchestrator | PLAY [Reset mon_host] **********************************************************
2026-02-17 06:06:40.639857 | orchestrator |
2026-02-17 06:06:40.639868 | orchestrator | TASK [Reset mon_host fact] *****************************************************
2026-02-17 06:06:40.639880 | orchestrator | Tuesday 17 February 2026 06:06:02 +0000 (0:00:03.341) 0:19:18.111 ******
2026-02-17 06:06:40.639891 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:06:40.639917 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:06:40.639930 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:06:40.639943 | orchestrator |
2026-02-17 06:06:40.639955 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-17 06:06:40.639968 | orchestrator |
2026-02-17 06:06:40.639981 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-17 06:06:40.639994 | orchestrator | Tuesday 17 February 2026 06:06:04 +0000 (0:00:01.786) 0:19:19.898 ******
2026-02-17 06:06:40.640007 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640020 | orchestrator |
2026-02-17 06:06:40.640033 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-17 06:06:40.640045 | orchestrator | Tuesday 17 February 2026 06:06:05 +0000 (0:00:01.148) 0:19:21.046 ******
2026-02-17 06:06:40.640058 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640071 | orchestrator |
2026-02-17 06:06:40.640083 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:06:40.640096 | orchestrator | Tuesday 17 February 2026 06:06:06 +0000 (0:00:01.152) 0:19:22.236 ******
2026-02-17 06:06:40.640109 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640122 | orchestrator |
2026-02-17 06:06:40.640135 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:06:40.640148 | orchestrator | Tuesday 17 February 2026 06:06:08 +0000 (0:00:01.152) 0:19:23.389 ******
2026-02-17 06:06:40.640161 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640173 | orchestrator |
2026-02-17 06:06:40.640185 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:06:40.640229 | orchestrator | Tuesday 17 February 2026 06:06:09 +0000 (0:00:01.243) 0:19:24.632 ******
2026-02-17 06:06:40.640249 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640269 | orchestrator |
2026-02-17 06:06:40.640288 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:06:40.640306 | orchestrator | Tuesday 17 February 2026 06:06:10 +0000 (0:00:01.138) 0:19:25.770 ******
2026-02-17 06:06:40.640318 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640328 | orchestrator |
2026-02-17 06:06:40.640340 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:06:40.640351 | orchestrator | Tuesday 17 February 2026 06:06:11 +0000 (0:00:01.146) 0:19:26.917 ******
2026-02-17 06:06:40.640361 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640372 | orchestrator |
2026-02-17 06:06:40.640383 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:06:40.640394 | orchestrator | Tuesday 17 February 2026 06:06:12 +0000 (0:00:01.155) 0:19:28.073 ******
2026-02-17 06:06:40.640405 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640416 | orchestrator |
2026-02-17 06:06:40.640427 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:06:40.640438 | orchestrator | Tuesday 17 February 2026 06:06:13 +0000 (0:00:01.166) 0:19:29.239 ******
2026-02-17 06:06:40.640450 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640460 | orchestrator |
2026-02-17 06:06:40.640471 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:06:40.640482 | orchestrator | Tuesday 17 February 2026 06:06:15 +0000 (0:00:01.157) 0:19:30.396 ******
2026-02-17 06:06:40.640493 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640513 | orchestrator |
2026-02-17 06:06:40.640525 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:06:40.640536 | orchestrator | Tuesday 17 February 2026 06:06:16 +0000 (0:00:01.203) 0:19:31.600 ******
2026-02-17 06:06:40.640547 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640558 | orchestrator |
2026-02-17 06:06:40.640587 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:06:40.640599 | orchestrator | Tuesday 17 February 2026 06:06:17 +0000 (0:00:01.133) 0:19:32.734 ******
2026-02-17 06:06:40.640610 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640621 | orchestrator |
2026-02-17 06:06:40.640632 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:06:40.640643 | orchestrator | Tuesday 17 February 2026 06:06:18 +0000 (0:00:01.167) 0:19:33.901 ******
2026-02-17 06:06:40.640654 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640665 | orchestrator |
2026-02-17 06:06:40.640676 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:06:40.640686 | orchestrator | Tuesday 17 February 2026 06:06:19 +0000 (0:00:01.151) 0:19:35.053 ******
2026-02-17 06:06:40.640697 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640708 | orchestrator |
2026-02-17 06:06:40.640719 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:06:40.640730 | orchestrator | Tuesday 17 February 2026 06:06:20 +0000 (0:00:01.216) 0:19:36.270 ******
2026-02-17 06:06:40.640741 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640752 | orchestrator |
2026-02-17 06:06:40.640763 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:06:40.640774 | orchestrator | Tuesday 17 February 2026 06:06:22 +0000 (0:00:01.167) 0:19:37.438 ******
2026-02-17 06:06:40.640785 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640796 | orchestrator |
2026-02-17 06:06:40.640806 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:06:40.640817 | orchestrator | Tuesday 17 February 2026 06:06:23 +0000 (0:00:01.139) 0:19:38.578 ******
2026-02-17 06:06:40.640828 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640855 | orchestrator |
2026-02-17 06:06:40.640877 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:06:40.640889 | orchestrator | Tuesday 17 February 2026 06:06:24 +0000 (0:00:01.144) 0:19:39.722 ******
2026-02-17 06:06:40.640900 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640911 | orchestrator |
2026-02-17 06:06:40.640922 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:06:40.640933 | orchestrator | Tuesday 17 February 2026 06:06:25 +0000 (0:00:01.129) 0:19:40.852 ******
2026-02-17 06:06:40.640944 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.640954 | orchestrator |
2026-02-17 06:06:40.640966 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:06:40.640983 | orchestrator | Tuesday 17 February 2026 06:06:26 +0000 (0:00:01.169) 0:19:42.021 ******
2026-02-17 06:06:40.640995 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641006 | orchestrator |
2026-02-17 06:06:40.641017 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:06:40.641028 | orchestrator | Tuesday 17 February 2026 06:06:27 +0000 (0:00:01.128) 0:19:43.149 ******
2026-02-17 06:06:40.641039 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641050 | orchestrator |
2026-02-17 06:06:40.641061 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:06:40.641072 | orchestrator | Tuesday 17 February 2026 06:06:29 +0000 (0:00:01.172) 0:19:44.322 ******
2026-02-17 06:06:40.641082 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641093 | orchestrator |
2026-02-17 06:06:40.641104 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:06:40.641115 | orchestrator | Tuesday 17 February 2026 06:06:30 +0000 (0:00:01.138) 0:19:45.461 ******
2026-02-17 06:06:40.641133 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641144 | orchestrator |
2026-02-17 06:06:40.641155 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:06:40.641166 | orchestrator | Tuesday 17 February 2026 06:06:31 +0000 (0:00:01.142) 0:19:46.604 ******
2026-02-17 06:06:40.641177 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641188 | orchestrator |
2026-02-17 06:06:40.641241 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:06:40.641254 | orchestrator | Tuesday 17 February 2026 06:06:32 +0000 (0:00:01.171) 0:19:47.775 ******
2026-02-17 06:06:40.641265 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641276 | orchestrator |
2026-02-17 06:06:40.641287 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:06:40.641298 | orchestrator | Tuesday 17 February 2026 06:06:33 +0000 (0:00:01.131) 0:19:48.907 ******
2026-02-17 06:06:40.641309 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641320 | orchestrator |
2026-02-17 06:06:40.641331 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:06:40.641342 | orchestrator | Tuesday 17 February 2026 06:06:34 +0000 (0:00:01.178) 0:19:50.085 ******
2026-02-17 06:06:40.641353 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641364 | orchestrator |
2026-02-17 06:06:40.641375 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:06:40.641386 | orchestrator | Tuesday 17 February 2026 06:06:35 +0000 (0:00:01.174) 0:19:51.260 ******
2026-02-17 06:06:40.641397 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641408 | orchestrator |
2026-02-17 06:06:40.641419 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:06:40.641430 | orchestrator | Tuesday 17 February 2026 06:06:37 +0000 (0:00:01.129) 0:19:52.389 ******
2026-02-17 06:06:40.641441 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641452 | orchestrator |
2026-02-17 06:06:40.641463 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 06:06:40.641474 | orchestrator | Tuesday 17 February 2026 06:06:38 +0000 (0:00:01.163) 0:19:53.553 ******
2026-02-17 06:06:40.641485 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641496 | orchestrator |
2026-02-17 06:06:40.641507 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 06:06:40.641518 | orchestrator | Tuesday 17 February 2026 06:06:39 +0000 (0:00:01.187) 0:19:54.740 ******
2026-02-17 06:06:40.641529 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:06:40.641540 | orchestrator |
2026-02-17 06:06:40.641551 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 06:06:40.641570 | orchestrator | Tuesday 17 February 2026 06:06:40 +0000 (0:00:01.153) 0:19:55.894 ******
2026-02-17 06:07:23.974819 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.974936 | orchestrator |
2026-02-17 06:07:23.974952 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 06:07:23.974965 | orchestrator | Tuesday 17 February 2026 06:06:41 +0000 (0:00:01.125) 0:19:57.019 ******
2026-02-17 06:07:23.974977 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.974989 | orchestrator |
2026-02-17 06:07:23.975000 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 06:07:23.975012 | orchestrator | Tuesday 17 February 2026 06:06:42 +0000 (0:00:01.180) 0:19:58.200 ******
2026-02-17 06:07:23.975023 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975034 | orchestrator |
2026-02-17 06:07:23.975045 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 06:07:23.975056 | orchestrator | Tuesday 17 February 2026 06:06:44 +0000 (0:00:01.234) 0:19:59.435 ******
2026-02-17 06:07:23.975067 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975078 | orchestrator |
2026-02-17 06:07:23.975089 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 06:07:23.975100 | orchestrator | Tuesday 17 February 2026 06:06:45 +0000 (0:00:01.182) 0:20:00.617 ******
2026-02-17 06:07:23.975136 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975148 | orchestrator |
2026-02-17 06:07:23.975160 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 06:07:23.975171 | orchestrator | Tuesday 17 February 2026 06:06:46 +0000 (0:00:01.157) 0:20:01.775 ******
2026-02-17 06:07:23.975181 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975192 | orchestrator |
2026-02-17 06:07:23.975204 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 06:07:23.975215 | orchestrator | Tuesday 17 February 2026 06:06:47 +0000 (0:00:01.170) 0:20:02.945 ******
2026-02-17 06:07:23.975225 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975272 | orchestrator |
2026-02-17 06:07:23.975284 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 06:07:23.975295 | orchestrator | Tuesday 17 February 2026 06:06:48 +0000 (0:00:01.111) 0:20:04.056 ******
2026-02-17 06:07:23.975306 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975317 | orchestrator |
2026-02-17 06:07:23.975328 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 06:07:23.975341 | orchestrator | Tuesday 17 February 2026 06:06:49 +0000 (0:00:01.193) 0:20:05.250 ******
2026-02-17 06:07:23.975370 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975383 | orchestrator |
2026-02-17 06:07:23.975395 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 06:07:23.975409 | orchestrator | Tuesday 17 February 2026 06:06:51 +0000 (0:00:01.166) 0:20:06.417 ******
2026-02-17 06:07:23.975421 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975433 | orchestrator |
2026-02-17 06:07:23.975447 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 06:07:23.975460 | orchestrator | Tuesday 17 February 2026 06:06:52 +0000 (0:00:01.118) 0:20:07.535 ******
2026-02-17 06:07:23.975472 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975485 | orchestrator |
2026-02-17 06:07:23.975499 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 06:07:23.975511 | orchestrator | Tuesday 17 February 2026 06:06:53 +0000 (0:00:01.179) 0:20:08.715 ******
2026-02-17 06:07:23.975525 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975537 | orchestrator |
2026-02-17 06:07:23.975551 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 06:07:23.975565 | orchestrator | Tuesday 17 February 2026 06:06:54 +0000 (0:00:01.123) 0:20:09.839 ******
2026-02-17 06:07:23.975583 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975602 | orchestrator |
2026-02-17 06:07:23.975620 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 06:07:23.975635 | orchestrator | Tuesday 17 February 2026 06:06:55 +0000 (0:00:01.161) 0:20:11.001 ******
2026-02-17 06:07:23.975650 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975665 | orchestrator |
2026-02-17 06:07:23.975694 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 06:07:23.975714 | orchestrator | Tuesday 17 February 2026 06:06:56 +0000 (0:00:01.142) 0:20:12.143 ******
2026-02-17 06:07:23.975732 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975782 | orchestrator |
2026-02-17 06:07:23.975833 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 06:07:23.975853 | orchestrator | Tuesday 17 February 2026 06:06:58 +0000 (0:00:01.300) 0:20:13.443 ******
2026-02-17 06:07:23.975871 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975889 | orchestrator |
2026-02-17 06:07:23.975907 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 06:07:23.975925 | orchestrator | Tuesday 17 February 2026 06:06:59 +0000 (0:00:01.150) 0:20:14.594 ******
2026-02-17 06:07:23.975941 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.975959 | orchestrator |
2026-02-17 06:07:23.975977 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 06:07:23.976010 | orchestrator | Tuesday 17 February 2026 06:07:00 +0000 (0:00:01.230) 0:20:15.824 ******
2026-02-17 06:07:23.976030 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976049 | orchestrator |
2026-02-17 06:07:23.976066 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 06:07:23.976085 | orchestrator | Tuesday 17 February 2026 06:07:01 +0000 (0:00:01.156) 0:20:16.981 ******
2026-02-17 06:07:23.976103 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976120 | orchestrator |
2026-02-17 06:07:23.976138 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:07:23.976151 | orchestrator | Tuesday 17 February 2026 06:07:02 +0000 (0:00:01.148) 0:20:18.130 ******
2026-02-17 06:07:23.976167 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976185 | orchestrator |
2026-02-17 06:07:23.976204 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:07:23.976272 | orchestrator | Tuesday 17 February 2026 06:07:04 +0000 (0:00:01.142) 0:20:19.272 ******
2026-02-17 06:07:23.976296 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976313 | orchestrator |
2026-02-17 06:07:23.976331 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:07:23.976348 | orchestrator | Tuesday 17 February 2026 06:07:05 +0000 (0:00:01.132) 0:20:20.405 ******
2026-02-17 06:07:23.976363 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976378 | orchestrator |
2026-02-17 06:07:23.976396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:07:23.976415 | orchestrator | Tuesday 17 February 2026 06:07:06 +0000 (0:00:01.200) 0:20:21.606 ******
2026-02-17 06:07:23.976433 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976452 | orchestrator |
2026-02-17 06:07:23.976471 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:07:23.976488 | orchestrator | Tuesday 17 February 2026 06:07:07 +0000 (0:00:01.216) 0:20:22.823 ******
2026-02-17 06:07:23.976506 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-17 06:07:23.976525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 06:07:23.976544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 06:07:23.976563 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976582 | orchestrator |
2026-02-17 06:07:23.976600 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:07:23.976619 | orchestrator | Tuesday 17 February 2026 06:07:09 +0000 (0:00:01.857) 0:20:24.680 ******
2026-02-17 06:07:23.976638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-17 06:07:23.976656 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 06:07:23.976672 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 06:07:23.976684 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976695 | orchestrator |
2026-02-17 06:07:23.976705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:07:23.976717 | orchestrator | Tuesday 17 February 2026 06:07:11 +0000 (0:00:01.861) 0:20:26.542 ******
2026-02-17 06:07:23.976727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-17 06:07:23.976738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 06:07:23.976760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 06:07:23.976776 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976795 | orchestrator |
2026-02-17 06:07:23.976812 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:07:23.976829 | orchestrator | Tuesday 17 February 2026 06:07:12 +0000 (0:00:01.409) 0:20:27.951 ******
2026-02-17 06:07:23.976848 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976867 | orchestrator |
2026-02-17 06:07:23.976886 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:07:23.976920 | orchestrator | Tuesday 17 February 2026 06:07:13 +0000 (0:00:01.188) 0:20:29.139 ******
2026-02-17 06:07:23.976939 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-17 06:07:23.976958 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.976970 | orchestrator |
2026-02-17 06:07:23.976981 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 06:07:23.976992 | orchestrator | Tuesday 17 February 2026 06:07:15 +0000 (0:00:01.448) 0:20:30.588 ******
2026-02-17 06:07:23.977003 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.977014 | orchestrator |
2026-02-17 06:07:23.977025 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-17 06:07:23.977036 | orchestrator | Tuesday 17 February 2026 06:07:16 +0000 (0:00:01.155) 0:20:31.743 ******
2026-02-17 06:07:23.977047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:07:23.977058 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 06:07:23.977069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 06:07:23.977079 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.977090 | orchestrator |
2026-02-17 06:07:23.977101 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-17 06:07:23.977112 | orchestrator | Tuesday 17 February 2026 06:07:17 +0000 (0:00:01.412) 0:20:33.156 ******
2026-02-17 06:07:23.977123 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.977134 | orchestrator |
2026-02-17 06:07:23.977145 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-17 06:07:23.977157 | orchestrator | Tuesday 17 February 2026 06:07:19 +0000 (0:00:01.167) 0:20:34.323 ******
2026-02-17 06:07:23.977168 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.977178 | orchestrator |
2026-02-17 06:07:23.977189 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-17 06:07:23.977200 | orchestrator | Tuesday 17 February 2026 06:07:20 +0000 (0:00:01.124) 0:20:35.448 ******
2026-02-17 06:07:23.977211 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.977222 | orchestrator |
2026-02-17 06:07:23.977262 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-17 06:07:23.977274 | orchestrator | Tuesday 17 February 2026 06:07:21 +0000 (0:00:01.121) 0:20:36.570 ******
2026-02-17 06:07:23.977285 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:07:23.977304 | orchestrator |
2026-02-17 06:07:23.977322 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-17 06:07:23.977339 | orchestrator |
2026-02-17 06:07:23.977357 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-17 06:07:23.977375 | orchestrator | Tuesday 17 February 2026 06:07:22 +0000 (0:00:01.018) 0:20:37.588 ******
2026-02-17 06:07:23.977394 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:23.977413 | orchestrator |
2026-02-17 06:07:23.977433 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-17 06:07:23.977450 | orchestrator | Tuesday 17 February 2026 06:07:23 +0000 (0:00:00.850) 0:20:38.438 ******
2026-02-17 06:07:23.977470 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:23.977482 | orchestrator |
2026-02-17 06:07:23.977505 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:07:56.681492 | orchestrator | Tuesday 17 February 2026 06:07:23 +0000 (0:00:00.790) 0:20:39.229 ******
2026-02-17 06:07:56.681610 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.681644 | orchestrator |
2026-02-17 06:07:56.681666 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:07:56.681683 | orchestrator | Tuesday 17 February 2026 06:07:24 +0000 (0:00:00.833) 0:20:40.063 ******
2026-02-17 06:07:56.681701 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.681718 | orchestrator |
2026-02-17 06:07:56.681735 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:07:56.681751 | orchestrator | Tuesday 17 February 2026 06:07:25 +0000 (0:00:00.808) 0:20:40.872 ******
2026-02-17 06:07:56.681800 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.681818 | orchestrator |
2026-02-17 06:07:56.681835 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:07:56.681854 | orchestrator | Tuesday 17 February 2026 06:07:26 +0000 (0:00:00.782) 0:20:41.655 ******
2026-02-17 06:07:56.681875 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.681892 | orchestrator |
2026-02-17 06:07:56.681912 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:07:56.681925 | orchestrator | Tuesday 17 February 2026 06:07:27 +0000 (0:00:00.800) 0:20:42.455 ******
2026-02-17 06:07:56.681936 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.681947 | orchestrator |
2026-02-17 06:07:56.681958 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:07:56.681969 | orchestrator | Tuesday 17 February 2026 06:07:27 +0000 (0:00:00.757) 0:20:43.213 ******
2026-02-17 06:07:56.681980 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.681992 | orchestrator |
2026-02-17 06:07:56.682002 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:07:56.682079 | orchestrator | Tuesday 17 February 2026 06:07:28 +0000 (0:00:00.792) 0:20:44.006 ******
2026-02-17 06:07:56.682094 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.682107 | orchestrator |
2026-02-17 06:07:56.682121 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:07:56.682134 | orchestrator | Tuesday 17 February 2026 06:07:29 +0000 (0:00:00.793) 0:20:44.799 ******
2026-02-17 06:07:56.682158 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.682170 | orchestrator |
2026-02-17 06:07:56.682182 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:07:56.682210 | orchestrator | Tuesday 17 February 2026 06:07:30 +0000 (0:00:00.810) 0:20:45.610 ******
2026-02-17 06:07:56.682223 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:07:56.682236 | orchestrator |
2026-02-17 06:07:56.682248 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:07:56.682287 | orchestrator | Tuesday 17 February 2026 06:07:31 +0000 (0:00:00.775) 0:20:46.386 ****** 2026-02-17 06:07:56.682300 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682313 | orchestrator | 2026-02-17 06:07:56.682326 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-17 06:07:56.682338 | orchestrator | Tuesday 17 February 2026 06:07:32 +0000 (0:00:00.953) 0:20:47.339 ****** 2026-02-17 06:07:56.682351 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682364 | orchestrator | 2026-02-17 06:07:56.682376 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-17 06:07:56.682386 | orchestrator | Tuesday 17 February 2026 06:07:32 +0000 (0:00:00.815) 0:20:48.155 ****** 2026-02-17 06:07:56.682398 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682409 | orchestrator | 2026-02-17 06:07:56.682421 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-17 06:07:56.682432 | orchestrator | Tuesday 17 February 2026 06:07:33 +0000 (0:00:00.812) 0:20:48.968 ****** 2026-02-17 06:07:56.682443 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682454 | orchestrator | 2026-02-17 06:07:56.682465 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-17 06:07:56.682475 | orchestrator | Tuesday 17 February 2026 06:07:34 +0000 (0:00:00.789) 0:20:49.758 ****** 2026-02-17 06:07:56.682486 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682497 | orchestrator | 2026-02-17 06:07:56.682508 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-17 06:07:56.682519 | orchestrator | Tuesday 17 February 2026 06:07:35 +0000 (0:00:00.782) 0:20:50.541 ****** 2026-02-17 06:07:56.682530 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682541 
| orchestrator | 2026-02-17 06:07:56.682552 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-17 06:07:56.682563 | orchestrator | Tuesday 17 February 2026 06:07:36 +0000 (0:00:00.835) 0:20:51.376 ****** 2026-02-17 06:07:56.682585 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682596 | orchestrator | 2026-02-17 06:07:56.682607 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-17 06:07:56.682618 | orchestrator | Tuesday 17 February 2026 06:07:36 +0000 (0:00:00.792) 0:20:52.169 ****** 2026-02-17 06:07:56.682629 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682640 | orchestrator | 2026-02-17 06:07:56.682651 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-17 06:07:56.682663 | orchestrator | Tuesday 17 February 2026 06:07:37 +0000 (0:00:00.878) 0:20:53.048 ****** 2026-02-17 06:07:56.682673 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682684 | orchestrator | 2026-02-17 06:07:56.682695 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-17 06:07:56.682706 | orchestrator | Tuesday 17 February 2026 06:07:38 +0000 (0:00:00.821) 0:20:53.870 ****** 2026-02-17 06:07:56.682717 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682728 | orchestrator | 2026-02-17 06:07:56.682739 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-17 06:07:56.682750 | orchestrator | Tuesday 17 February 2026 06:07:39 +0000 (0:00:00.815) 0:20:54.685 ****** 2026-02-17 06:07:56.682761 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682772 | orchestrator | 2026-02-17 06:07:56.682783 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-17 06:07:56.682814 | orchestrator | Tuesday 17 
February 2026 06:07:40 +0000 (0:00:00.830) 0:20:55.516 ****** 2026-02-17 06:07:56.682826 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682837 | orchestrator | 2026-02-17 06:07:56.682848 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-17 06:07:56.682859 | orchestrator | Tuesday 17 February 2026 06:07:41 +0000 (0:00:00.790) 0:20:56.307 ****** 2026-02-17 06:07:56.682870 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682880 | orchestrator | 2026-02-17 06:07:56.682891 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-17 06:07:56.682902 | orchestrator | Tuesday 17 February 2026 06:07:41 +0000 (0:00:00.961) 0:20:57.269 ****** 2026-02-17 06:07:56.682913 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682924 | orchestrator | 2026-02-17 06:07:56.682935 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-17 06:07:56.682946 | orchestrator | Tuesday 17 February 2026 06:07:42 +0000 (0:00:00.853) 0:20:58.122 ****** 2026-02-17 06:07:56.682957 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.682968 | orchestrator | 2026-02-17 06:07:56.682979 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-17 06:07:56.682990 | orchestrator | Tuesday 17 February 2026 06:07:43 +0000 (0:00:00.818) 0:20:58.941 ****** 2026-02-17 06:07:56.683001 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683012 | orchestrator | 2026-02-17 06:07:56.683023 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-17 06:07:56.683034 | orchestrator | Tuesday 17 February 2026 06:07:44 +0000 (0:00:00.785) 0:20:59.726 ****** 2026-02-17 06:07:56.683045 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683056 | orchestrator | 2026-02-17 06:07:56.683067 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-17 06:07:56.683078 | orchestrator | Tuesday 17 February 2026 06:07:45 +0000 (0:00:00.877) 0:21:00.604 ****** 2026-02-17 06:07:56.683089 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683099 | orchestrator | 2026-02-17 06:07:56.683115 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:07:56.683133 | orchestrator | Tuesday 17 February 2026 06:07:46 +0000 (0:00:00.833) 0:21:01.438 ****** 2026-02-17 06:07:56.683160 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683180 | orchestrator | 2026-02-17 06:07:56.683196 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:07:56.683214 | orchestrator | Tuesday 17 February 2026 06:07:46 +0000 (0:00:00.795) 0:21:02.233 ****** 2026-02-17 06:07:56.683250 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683305 | orchestrator | 2026-02-17 06:07:56.683322 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-17 06:07:56.683338 | orchestrator | Tuesday 17 February 2026 06:07:47 +0000 (0:00:00.796) 0:21:03.029 ****** 2026-02-17 06:07:56.683355 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683371 | orchestrator | 2026-02-17 06:07:56.683389 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-17 06:07:56.683406 | orchestrator | Tuesday 17 February 2026 06:07:48 +0000 (0:00:00.817) 0:21:03.847 ****** 2026-02-17 06:07:56.683424 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683443 | orchestrator | 2026-02-17 06:07:56.683462 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 06:07:56.683480 | orchestrator | Tuesday 17 February 2026 06:07:49 +0000 (0:00:00.817) 0:21:04.665 ****** 
2026-02-17 06:07:56.683497 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683532 | orchestrator | 2026-02-17 06:07:56.683543 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-17 06:07:56.683554 | orchestrator | Tuesday 17 February 2026 06:07:50 +0000 (0:00:00.801) 0:21:05.467 ****** 2026-02-17 06:07:56.683565 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683576 | orchestrator | 2026-02-17 06:07:56.683600 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-17 06:07:56.683611 | orchestrator | Tuesday 17 February 2026 06:07:51 +0000 (0:00:00.839) 0:21:06.306 ****** 2026-02-17 06:07:56.683621 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683632 | orchestrator | 2026-02-17 06:07:56.683643 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-17 06:07:56.683654 | orchestrator | Tuesday 17 February 2026 06:07:51 +0000 (0:00:00.846) 0:21:07.153 ****** 2026-02-17 06:07:56.683665 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683676 | orchestrator | 2026-02-17 06:07:56.683687 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-17 06:07:56.683698 | orchestrator | Tuesday 17 February 2026 06:07:52 +0000 (0:00:00.779) 0:21:07.933 ****** 2026-02-17 06:07:56.683709 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683719 | orchestrator | 2026-02-17 06:07:56.683730 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-17 06:07:56.683741 | orchestrator | Tuesday 17 February 2026 06:07:53 +0000 (0:00:00.825) 0:21:08.758 ****** 2026-02-17 06:07:56.683752 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683763 | orchestrator | 2026-02-17 06:07:56.683774 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-02-17 06:07:56.683786 | orchestrator | Tuesday 17 February 2026 06:07:54 +0000 (0:00:00.774) 0:21:09.532 ****** 2026-02-17 06:07:56.683797 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683808 | orchestrator | 2026-02-17 06:07:56.683819 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-17 06:07:56.683830 | orchestrator | Tuesday 17 February 2026 06:07:55 +0000 (0:00:00.798) 0:21:10.331 ****** 2026-02-17 06:07:56.683841 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683852 | orchestrator | 2026-02-17 06:07:56.683863 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-17 06:07:56.683874 | orchestrator | Tuesday 17 February 2026 06:07:55 +0000 (0:00:00.806) 0:21:11.138 ****** 2026-02-17 06:07:56.683885 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:07:56.683895 | orchestrator | 2026-02-17 06:07:56.683906 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-17 06:07:56.683933 | orchestrator | Tuesday 17 February 2026 06:07:56 +0000 (0:00:00.797) 0:21:11.936 ****** 2026-02-17 06:08:27.416426 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416539 | orchestrator | 2026-02-17 06:08:27.416572 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-17 06:08:27.416609 | orchestrator | Tuesday 17 February 2026 06:07:57 +0000 (0:00:00.786) 0:21:12.722 ****** 2026-02-17 06:08:27.416621 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416632 | orchestrator | 2026-02-17 06:08:27.416643 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-17 06:08:27.416654 | orchestrator | Tuesday 17 February 2026 06:07:58 +0000 
(0:00:00.769) 0:21:13.492 ****** 2026-02-17 06:08:27.416665 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416676 | orchestrator | 2026-02-17 06:08:27.416687 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-17 06:08:27.416698 | orchestrator | Tuesday 17 February 2026 06:07:58 +0000 (0:00:00.764) 0:21:14.257 ****** 2026-02-17 06:08:27.416709 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416720 | orchestrator | 2026-02-17 06:08:27.416731 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-17 06:08:27.416742 | orchestrator | Tuesday 17 February 2026 06:07:59 +0000 (0:00:00.876) 0:21:15.133 ****** 2026-02-17 06:08:27.416753 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416764 | orchestrator | 2026-02-17 06:08:27.416774 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-17 06:08:27.416785 | orchestrator | Tuesday 17 February 2026 06:08:00 +0000 (0:00:00.836) 0:21:15.970 ****** 2026-02-17 06:08:27.416797 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416808 | orchestrator | 2026-02-17 06:08:27.416819 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-17 06:08:27.416830 | orchestrator | Tuesday 17 February 2026 06:08:01 +0000 (0:00:00.879) 0:21:16.849 ****** 2026-02-17 06:08:27.416841 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416851 | orchestrator | 2026-02-17 06:08:27.416862 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-17 06:08:27.416873 | orchestrator | Tuesday 17 February 2026 06:08:02 +0000 (0:00:00.807) 0:21:17.657 ****** 2026-02-17 06:08:27.416884 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416895 | orchestrator | 2026-02-17 06:08:27.416906 | orchestrator | TASK [ceph-facts : Set 
current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:08:27.416933 | orchestrator | Tuesday 17 February 2026 06:08:03 +0000 (0:00:00.844) 0:21:18.502 ****** 2026-02-17 06:08:27.416945 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416956 | orchestrator | 2026-02-17 06:08:27.416967 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:08:27.416978 | orchestrator | Tuesday 17 February 2026 06:08:04 +0000 (0:00:00.781) 0:21:19.284 ****** 2026-02-17 06:08:27.416988 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.416999 | orchestrator | 2026-02-17 06:08:27.417010 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:08:27.417021 | orchestrator | Tuesday 17 February 2026 06:08:04 +0000 (0:00:00.820) 0:21:20.104 ****** 2026-02-17 06:08:27.417032 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417043 | orchestrator | 2026-02-17 06:08:27.417054 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:08:27.417065 | orchestrator | Tuesday 17 February 2026 06:08:05 +0000 (0:00:00.796) 0:21:20.900 ****** 2026-02-17 06:08:27.417076 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417086 | orchestrator | 2026-02-17 06:08:27.417097 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:08:27.417108 | orchestrator | Tuesday 17 February 2026 06:08:06 +0000 (0:00:00.810) 0:21:21.711 ****** 2026-02-17 06:08:27.417119 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-17 06:08:27.417130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-17 06:08:27.417141 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-17 06:08:27.417152 | orchestrator | 
skipping: [testbed-node-1] 2026-02-17 06:08:27.417171 | orchestrator | 2026-02-17 06:08:27.417183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:08:27.417193 | orchestrator | Tuesday 17 February 2026 06:08:07 +0000 (0:00:01.074) 0:21:22.786 ****** 2026-02-17 06:08:27.417204 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-17 06:08:27.417215 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-17 06:08:27.417226 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-17 06:08:27.417237 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417247 | orchestrator | 2026-02-17 06:08:27.417258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:08:27.417269 | orchestrator | Tuesday 17 February 2026 06:08:08 +0000 (0:00:01.086) 0:21:23.872 ****** 2026-02-17 06:08:27.417295 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-17 06:08:27.417307 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-17 06:08:27.417318 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-17 06:08:27.417329 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417339 | orchestrator | 2026-02-17 06:08:27.417350 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:08:27.417361 | orchestrator | Tuesday 17 February 2026 06:08:09 +0000 (0:00:01.060) 0:21:24.933 ****** 2026-02-17 06:08:27.417372 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417383 | orchestrator | 2026-02-17 06:08:27.417393 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:08:27.417404 | orchestrator | Tuesday 17 February 2026 06:08:10 +0000 (0:00:00.794) 0:21:25.727 ****** 2026-02-17 06:08:27.417416 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-17 06:08:27.417427 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417437 | orchestrator | 2026-02-17 06:08:27.417448 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-17 06:08:27.417476 | orchestrator | Tuesday 17 February 2026 06:08:11 +0000 (0:00:01.012) 0:21:26.740 ****** 2026-02-17 06:08:27.417487 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417498 | orchestrator | 2026-02-17 06:08:27.417509 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:08:27.417520 | orchestrator | Tuesday 17 February 2026 06:08:12 +0000 (0:00:00.911) 0:21:27.651 ****** 2026-02-17 06:08:27.417531 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-17 06:08:27.417541 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-17 06:08:27.417552 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-17 06:08:27.417563 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417573 | orchestrator | 2026-02-17 06:08:27.417584 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-17 06:08:27.417595 | orchestrator | Tuesday 17 February 2026 06:08:13 +0000 (0:00:01.108) 0:21:28.760 ****** 2026-02-17 06:08:27.417606 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417617 | orchestrator | 2026-02-17 06:08:27.417627 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-17 06:08:27.417638 | orchestrator | Tuesday 17 February 2026 06:08:14 +0000 (0:00:00.757) 0:21:29.517 ****** 2026-02-17 06:08:27.417649 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417660 | orchestrator | 2026-02-17 06:08:27.417671 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] 
**************************************** 2026-02-17 06:08:27.417681 | orchestrator | Tuesday 17 February 2026 06:08:15 +0000 (0:00:00.861) 0:21:30.378 ****** 2026-02-17 06:08:27.417692 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417703 | orchestrator | 2026-02-17 06:08:27.417714 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-17 06:08:27.417725 | orchestrator | Tuesday 17 February 2026 06:08:15 +0000 (0:00:00.832) 0:21:31.210 ****** 2026-02-17 06:08:27.417736 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:08:27.417746 | orchestrator | 2026-02-17 06:08:27.417766 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-17 06:08:27.417777 | orchestrator | 2026-02-17 06:08:27.417788 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-17 06:08:27.417799 | orchestrator | Tuesday 17 February 2026 06:08:16 +0000 (0:00:01.021) 0:21:32.232 ****** 2026-02-17 06:08:27.417810 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.417821 | orchestrator | 2026-02-17 06:08:27.417836 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:08:27.417848 | orchestrator | Tuesday 17 February 2026 06:08:17 +0000 (0:00:00.841) 0:21:33.073 ****** 2026-02-17 06:08:27.417859 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.417869 | orchestrator | 2026-02-17 06:08:27.417880 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 06:08:27.417891 | orchestrator | Tuesday 17 February 2026 06:08:18 +0000 (0:00:00.814) 0:21:33.887 ****** 2026-02-17 06:08:27.417902 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.417912 | orchestrator | 2026-02-17 06:08:27.417923 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-02-17 06:08:27.417934 | orchestrator | Tuesday 17 February 2026 06:08:19 +0000 (0:00:00.782) 0:21:34.669 ****** 2026-02-17 06:08:27.417945 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.417956 | orchestrator | 2026-02-17 06:08:27.417966 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 06:08:27.417977 | orchestrator | Tuesday 17 February 2026 06:08:20 +0000 (0:00:00.825) 0:21:35.494 ****** 2026-02-17 06:08:27.417988 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.417998 | orchestrator | 2026-02-17 06:08:27.418009 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 06:08:27.418084 | orchestrator | Tuesday 17 February 2026 06:08:21 +0000 (0:00:00.783) 0:21:36.278 ****** 2026-02-17 06:08:27.418096 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.418106 | orchestrator | 2026-02-17 06:08:27.418117 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 06:08:27.418128 | orchestrator | Tuesday 17 February 2026 06:08:21 +0000 (0:00:00.850) 0:21:37.129 ****** 2026-02-17 06:08:27.418139 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.418150 | orchestrator | 2026-02-17 06:08:27.418161 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 06:08:27.418172 | orchestrator | Tuesday 17 February 2026 06:08:22 +0000 (0:00:00.801) 0:21:37.930 ****** 2026-02-17 06:08:27.418182 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.418193 | orchestrator | 2026-02-17 06:08:27.418204 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 06:08:27.418215 | orchestrator | Tuesday 17 February 2026 06:08:23 +0000 (0:00:00.788) 0:21:38.719 ****** 2026-02-17 06:08:27.418226 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.418237 
| orchestrator | 2026-02-17 06:08:27.418247 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 06:08:27.418258 | orchestrator | Tuesday 17 February 2026 06:08:24 +0000 (0:00:00.779) 0:21:39.499 ****** 2026-02-17 06:08:27.418269 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.418325 | orchestrator | 2026-02-17 06:08:27.418337 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 06:08:27.418348 | orchestrator | Tuesday 17 February 2026 06:08:25 +0000 (0:00:00.777) 0:21:40.276 ****** 2026-02-17 06:08:27.418359 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.418370 | orchestrator | 2026-02-17 06:08:27.418380 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 06:08:27.418391 | orchestrator | Tuesday 17 February 2026 06:08:25 +0000 (0:00:00.806) 0:21:41.083 ****** 2026-02-17 06:08:27.418402 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.418413 | orchestrator | 2026-02-17 06:08:27.418424 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-17 06:08:27.418435 | orchestrator | Tuesday 17 February 2026 06:08:26 +0000 (0:00:00.785) 0:21:41.868 ****** 2026-02-17 06:08:27.418454 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:27.418466 | orchestrator | 2026-02-17 06:08:27.418476 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-17 06:08:27.418495 | orchestrator | Tuesday 17 February 2026 06:08:27 +0000 (0:00:00.807) 0:21:42.676 ****** 2026-02-17 06:08:59.966074 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966191 | orchestrator | 2026-02-17 06:08:59.966209 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-17 06:08:59.966223 | orchestrator | Tuesday 17 February 2026 
06:08:28 +0000 (0:00:00.775) 0:21:43.452 ****** 2026-02-17 06:08:59.966234 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966246 | orchestrator | 2026-02-17 06:08:59.966257 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-17 06:08:59.966268 | orchestrator | Tuesday 17 February 2026 06:08:28 +0000 (0:00:00.797) 0:21:44.249 ****** 2026-02-17 06:08:59.966279 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966291 | orchestrator | 2026-02-17 06:08:59.966372 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-17 06:08:59.966396 | orchestrator | Tuesday 17 February 2026 06:08:29 +0000 (0:00:00.791) 0:21:45.041 ****** 2026-02-17 06:08:59.966416 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966428 | orchestrator | 2026-02-17 06:08:59.966440 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-17 06:08:59.966451 | orchestrator | Tuesday 17 February 2026 06:08:30 +0000 (0:00:00.787) 0:21:45.829 ****** 2026-02-17 06:08:59.966462 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966473 | orchestrator | 2026-02-17 06:08:59.966484 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-17 06:08:59.966496 | orchestrator | Tuesday 17 February 2026 06:08:31 +0000 (0:00:00.841) 0:21:46.670 ****** 2026-02-17 06:08:59.966507 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966518 | orchestrator | 2026-02-17 06:08:59.966529 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-17 06:08:59.966541 | orchestrator | Tuesday 17 February 2026 06:08:32 +0000 (0:00:00.797) 0:21:47.468 ****** 2026-02-17 06:08:59.966552 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966563 | orchestrator | 2026-02-17 06:08:59.966577 | 
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-17 06:08:59.966590 | orchestrator | Tuesday 17 February 2026 06:08:33 +0000 (0:00:00.818) 0:21:48.286 ****** 2026-02-17 06:08:59.966602 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966616 | orchestrator | 2026-02-17 06:08:59.966629 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-17 06:08:59.966659 | orchestrator | Tuesday 17 February 2026 06:08:33 +0000 (0:00:00.797) 0:21:49.084 ****** 2026-02-17 06:08:59.966672 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966685 | orchestrator | 2026-02-17 06:08:59.966698 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-17 06:08:59.966711 | orchestrator | Tuesday 17 February 2026 06:08:34 +0000 (0:00:00.817) 0:21:49.901 ****** 2026-02-17 06:08:59.966722 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966733 | orchestrator | 2026-02-17 06:08:59.966745 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-17 06:08:59.966756 | orchestrator | Tuesday 17 February 2026 06:08:35 +0000 (0:00:00.850) 0:21:50.751 ****** 2026-02-17 06:08:59.966767 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966778 | orchestrator | 2026-02-17 06:08:59.966789 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-17 06:08:59.966800 | orchestrator | Tuesday 17 February 2026 06:08:36 +0000 (0:00:00.801) 0:21:51.553 ****** 2026-02-17 06:08:59.966811 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966822 | orchestrator | 2026-02-17 06:08:59.966833 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-17 06:08:59.966870 | orchestrator | Tuesday 17 February 2026 06:08:37 +0000 (0:00:00.810) 0:21:52.364 ****** 
2026-02-17 06:08:59.966881 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966892 | orchestrator | 2026-02-17 06:08:59.966903 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-17 06:08:59.966914 | orchestrator | Tuesday 17 February 2026 06:08:37 +0000 (0:00:00.802) 0:21:53.166 ****** 2026-02-17 06:08:59.966925 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966936 | orchestrator | 2026-02-17 06:08:59.966947 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-17 06:08:59.966958 | orchestrator | Tuesday 17 February 2026 06:08:38 +0000 (0:00:00.914) 0:21:54.080 ****** 2026-02-17 06:08:59.966969 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.966979 | orchestrator | 2026-02-17 06:08:59.966990 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-17 06:08:59.967001 | orchestrator | Tuesday 17 February 2026 06:08:39 +0000 (0:00:00.801) 0:21:54.882 ****** 2026-02-17 06:08:59.967012 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967024 | orchestrator | 2026-02-17 06:08:59.967035 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:08:59.967046 | orchestrator | Tuesday 17 February 2026 06:08:40 +0000 (0:00:00.809) 0:21:55.691 ****** 2026-02-17 06:08:59.967056 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967067 | orchestrator | 2026-02-17 06:08:59.967078 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:08:59.967090 | orchestrator | Tuesday 17 February 2026 06:08:41 +0000 (0:00:00.838) 0:21:56.530 ****** 2026-02-17 06:08:59.967101 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967111 | orchestrator | 2026-02-17 06:08:59.967122 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-02-17 06:08:59.967133 | orchestrator | Tuesday 17 February 2026 06:08:42 +0000 (0:00:00.822) 0:21:57.352 ****** 2026-02-17 06:08:59.967144 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967155 | orchestrator | 2026-02-17 06:08:59.967166 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-17 06:08:59.967177 | orchestrator | Tuesday 17 February 2026 06:08:42 +0000 (0:00:00.826) 0:21:58.179 ****** 2026-02-17 06:08:59.967188 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967199 | orchestrator | 2026-02-17 06:08:59.967210 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 06:08:59.967221 | orchestrator | Tuesday 17 February 2026 06:08:43 +0000 (0:00:00.807) 0:21:58.986 ****** 2026-02-17 06:08:59.967232 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967243 | orchestrator | 2026-02-17 06:08:59.967272 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-17 06:08:59.967283 | orchestrator | Tuesday 17 February 2026 06:08:44 +0000 (0:00:00.824) 0:21:59.811 ****** 2026-02-17 06:08:59.967294 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967340 | orchestrator | 2026-02-17 06:08:59.967352 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-17 06:08:59.967363 | orchestrator | Tuesday 17 February 2026 06:08:45 +0000 (0:00:00.809) 0:22:00.620 ****** 2026-02-17 06:08:59.967374 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967385 | orchestrator | 2026-02-17 06:08:59.967396 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-17 06:08:59.967407 | orchestrator | Tuesday 17 February 2026 06:08:46 +0000 (0:00:00.751) 0:22:01.372 ****** 2026-02-17 06:08:59.967418 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 06:08:59.967428 | orchestrator | 2026-02-17 06:08:59.967439 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-17 06:08:59.967450 | orchestrator | Tuesday 17 February 2026 06:08:46 +0000 (0:00:00.789) 0:22:02.162 ****** 2026-02-17 06:08:59.967461 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967472 | orchestrator | 2026-02-17 06:08:59.967483 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-17 06:08:59.967504 | orchestrator | Tuesday 17 February 2026 06:08:47 +0000 (0:00:00.837) 0:22:03.000 ****** 2026-02-17 06:08:59.967515 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967526 | orchestrator | 2026-02-17 06:08:59.967537 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-17 06:08:59.967549 | orchestrator | Tuesday 17 February 2026 06:08:48 +0000 (0:00:00.874) 0:22:03.875 ****** 2026-02-17 06:08:59.967560 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967571 | orchestrator | 2026-02-17 06:08:59.967582 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-17 06:08:59.967593 | orchestrator | Tuesday 17 February 2026 06:08:49 +0000 (0:00:00.838) 0:22:04.714 ****** 2026-02-17 06:08:59.967604 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967615 | orchestrator | 2026-02-17 06:08:59.967626 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-17 06:08:59.967643 | orchestrator | Tuesday 17 February 2026 06:08:50 +0000 (0:00:00.791) 0:22:05.505 ****** 2026-02-17 06:08:59.967654 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967665 | orchestrator | 2026-02-17 06:08:59.967676 | orchestrator | TASK [ceph-config : Run 
'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-17 06:08:59.967688 | orchestrator | Tuesday 17 February 2026 06:08:51 +0000 (0:00:00.806) 0:22:06.312 ****** 2026-02-17 06:08:59.967699 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967710 | orchestrator | 2026-02-17 06:08:59.967721 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-17 06:08:59.967732 | orchestrator | Tuesday 17 February 2026 06:08:51 +0000 (0:00:00.821) 0:22:07.133 ****** 2026-02-17 06:08:59.967743 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967754 | orchestrator | 2026-02-17 06:08:59.967765 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-17 06:08:59.967776 | orchestrator | Tuesday 17 February 2026 06:08:52 +0000 (0:00:00.790) 0:22:07.925 ****** 2026-02-17 06:08:59.967787 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967798 | orchestrator | 2026-02-17 06:08:59.967808 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-17 06:08:59.967820 | orchestrator | Tuesday 17 February 2026 06:08:53 +0000 (0:00:00.769) 0:22:08.694 ****** 2026-02-17 06:08:59.967830 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967841 | orchestrator | 2026-02-17 06:08:59.967852 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-17 06:08:59.967863 | orchestrator | Tuesday 17 February 2026 06:08:54 +0000 (0:00:00.878) 0:22:09.573 ****** 2026-02-17 06:08:59.967874 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967885 | orchestrator | 2026-02-17 06:08:59.967895 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-17 06:08:59.967906 | orchestrator | Tuesday 17 February 2026 06:08:55 +0000 (0:00:00.824) 0:22:10.397 ****** 2026-02-17 
06:08:59.967917 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967928 | orchestrator | 2026-02-17 06:08:59.967939 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-17 06:08:59.967950 | orchestrator | Tuesday 17 February 2026 06:08:56 +0000 (0:00:00.888) 0:22:11.285 ****** 2026-02-17 06:08:59.967961 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.967972 | orchestrator | 2026-02-17 06:08:59.967983 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-17 06:08:59.967994 | orchestrator | Tuesday 17 February 2026 06:08:56 +0000 (0:00:00.811) 0:22:12.096 ****** 2026-02-17 06:08:59.968005 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.968015 | orchestrator | 2026-02-17 06:08:59.968027 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:08:59.968040 | orchestrator | Tuesday 17 February 2026 06:08:57 +0000 (0:00:00.779) 0:22:12.876 ****** 2026-02-17 06:08:59.968057 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.968068 | orchestrator | 2026-02-17 06:08:59.968079 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:08:59.968091 | orchestrator | Tuesday 17 February 2026 06:08:58 +0000 (0:00:00.787) 0:22:13.664 ****** 2026-02-17 06:08:59.968102 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.968112 | orchestrator | 2026-02-17 06:08:59.968123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:08:59.968134 | orchestrator | Tuesday 17 February 2026 06:08:59 +0000 (0:00:00.786) 0:22:14.451 ****** 2026-02-17 06:08:59.968145 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:08:59.968156 | orchestrator | 2026-02-17 06:08:59.968167 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:08:59.968186 | orchestrator | Tuesday 17 February 2026 06:08:59 +0000 (0:00:00.770) 0:22:15.222 ****** 2026-02-17 06:09:49.966595 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.966765 | orchestrator | 2026-02-17 06:09:49.966785 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:09:49.966799 | orchestrator | Tuesday 17 February 2026 06:09:00 +0000 (0:00:00.843) 0:22:16.065 ****** 2026-02-17 06:09:49.966811 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:09:49.966823 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:09:49.966834 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:09:49.966845 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.966856 | orchestrator | 2026-02-17 06:09:49.966867 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:09:49.966878 | orchestrator | Tuesday 17 February 2026 06:09:02 +0000 (0:00:01.428) 0:22:17.494 ****** 2026-02-17 06:09:49.966889 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:09:49.966900 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:09:49.966911 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:09:49.966922 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.966933 | orchestrator | 2026-02-17 06:09:49.966944 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:09:49.966955 | orchestrator | Tuesday 17 February 2026 06:09:03 +0000 (0:00:01.509) 0:22:19.003 ****** 2026-02-17 06:09:49.966966 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:09:49.966977 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:09:49.966988 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:09:49.966999 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.967010 | orchestrator | 2026-02-17 06:09:49.967021 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:09:49.967032 | orchestrator | Tuesday 17 February 2026 06:09:04 +0000 (0:00:01.083) 0:22:20.087 ****** 2026-02-17 06:09:49.967043 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.967053 | orchestrator | 2026-02-17 06:09:49.967064 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:09:49.967092 | orchestrator | Tuesday 17 February 2026 06:09:05 +0000 (0:00:00.826) 0:22:20.913 ****** 2026-02-17 06:09:49.967107 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-17 06:09:49.967119 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.967132 | orchestrator | 2026-02-17 06:09:49.967145 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-17 06:09:49.967157 | orchestrator | Tuesday 17 February 2026 06:09:06 +0000 (0:00:00.888) 0:22:21.802 ****** 2026-02-17 06:09:49.967170 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.967183 | orchestrator | 2026-02-17 06:09:49.967195 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:09:49.967208 | orchestrator | Tuesday 17 February 2026 06:09:07 +0000 (0:00:00.777) 0:22:22.580 ****** 2026-02-17 06:09:49.967242 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-17 06:09:49.967255 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-17 06:09:49.967269 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-17 06:09:49.967288 | orchestrator | skipping: 
[testbed-node-2] 2026-02-17 06:09:49.967307 | orchestrator | 2026-02-17 06:09:49.967327 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-17 06:09:49.967409 | orchestrator | Tuesday 17 February 2026 06:09:08 +0000 (0:00:01.081) 0:22:23.661 ****** 2026-02-17 06:09:49.967430 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.967449 | orchestrator | 2026-02-17 06:09:49.967469 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-17 06:09:49.967482 | orchestrator | Tuesday 17 February 2026 06:09:09 +0000 (0:00:00.794) 0:22:24.456 ****** 2026-02-17 06:09:49.967501 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.967519 | orchestrator | 2026-02-17 06:09:49.967537 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-17 06:09:49.967556 | orchestrator | Tuesday 17 February 2026 06:09:09 +0000 (0:00:00.781) 0:22:25.238 ****** 2026-02-17 06:09:49.967574 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.967594 | orchestrator | 2026-02-17 06:09:49.967612 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-17 06:09:49.967627 | orchestrator | Tuesday 17 February 2026 06:09:10 +0000 (0:00:00.763) 0:22:26.002 ****** 2026-02-17 06:09:49.967645 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:09:49.967664 | orchestrator | 2026-02-17 06:09:49.967682 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-17 06:09:49.967700 | orchestrator | 2026-02-17 06:09:49.967717 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-17 06:09:49.967735 | orchestrator | Tuesday 17 February 2026 06:09:12 +0000 (0:00:01.680) 0:22:27.683 ****** 2026-02-17 06:09:49.967754 | orchestrator | changed: [testbed-node-0] 2026-02-17 06:09:49.967772 | 
orchestrator | 2026-02-17 06:09:49.967789 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-17 06:09:49.967809 | orchestrator | Tuesday 17 February 2026 06:09:25 +0000 (0:00:13.279) 0:22:40.962 ****** 2026-02-17 06:09:49.967828 | orchestrator | changed: [testbed-node-0] 2026-02-17 06:09:49.967846 | orchestrator | 2026-02-17 06:09:49.967866 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:09:49.967886 | orchestrator | Tuesday 17 February 2026 06:09:28 +0000 (0:00:02.585) 0:22:43.547 ****** 2026-02-17 06:09:49.967904 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-17 06:09:49.967922 | orchestrator | 2026-02-17 06:09:49.967933 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:09:49.967943 | orchestrator | Tuesday 17 February 2026 06:09:29 +0000 (0:00:01.138) 0:22:44.686 ****** 2026-02-17 06:09:49.967954 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:09:49.967965 | orchestrator | 2026-02-17 06:09:49.967976 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:09:49.968008 | orchestrator | Tuesday 17 February 2026 06:09:30 +0000 (0:00:01.510) 0:22:46.197 ****** 2026-02-17 06:09:49.968019 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:09:49.968030 | orchestrator | 2026-02-17 06:09:49.968041 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:09:49.968052 | orchestrator | Tuesday 17 February 2026 06:09:32 +0000 (0:00:01.147) 0:22:47.345 ****** 2026-02-17 06:09:49.968063 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:09:49.968073 | orchestrator | 2026-02-17 06:09:49.968084 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:09:49.968095 | orchestrator | 
Tuesday 17 February 2026 06:09:33 +0000 (0:00:01.485) 0:22:48.830 ****** 2026-02-17 06:09:49.968106 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:09:49.968117 | orchestrator | 2026-02-17 06:09:49.968127 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:09:49.968149 | orchestrator | Tuesday 17 February 2026 06:09:34 +0000 (0:00:01.207) 0:22:50.038 ****** 2026-02-17 06:09:49.968160 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:09:49.968171 | orchestrator | 2026-02-17 06:09:49.968182 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:09:49.968193 | orchestrator | Tuesday 17 February 2026 06:09:35 +0000 (0:00:01.157) 0:22:51.195 ****** 2026-02-17 06:09:49.968204 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:09:49.968215 | orchestrator | 2026-02-17 06:09:49.968225 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:09:49.968237 | orchestrator | Tuesday 17 February 2026 06:09:37 +0000 (0:00:01.187) 0:22:52.383 ****** 2026-02-17 06:09:49.968248 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:09:49.968259 | orchestrator | 2026-02-17 06:09:49.968270 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:09:49.968280 | orchestrator | Tuesday 17 February 2026 06:09:38 +0000 (0:00:01.144) 0:22:53.528 ****** 2026-02-17 06:09:49.968291 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:09:49.968302 | orchestrator | 2026-02-17 06:09:49.968313 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:09:49.968324 | orchestrator | Tuesday 17 February 2026 06:09:39 +0000 (0:00:01.202) 0:22:54.731 ****** 2026-02-17 06:09:49.968334 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 06:09:49.968397 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:09:49.968409 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:09:49.968420 | orchestrator | 2026-02-17 06:09:49.968431 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:09:49.968442 | orchestrator | Tuesday 17 February 2026 06:09:41 +0000 (0:00:02.017) 0:22:56.748 ****** 2026-02-17 06:09:49.968453 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:09:49.968464 | orchestrator | 2026-02-17 06:09:49.968475 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:09:49.968485 | orchestrator | Tuesday 17 February 2026 06:09:42 +0000 (0:00:01.267) 0:22:58.016 ****** 2026-02-17 06:09:49.968496 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 06:09:49.968507 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:09:49.968518 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:09:49.968529 | orchestrator | 2026-02-17 06:09:49.968540 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:09:49.968551 | orchestrator | Tuesday 17 February 2026 06:09:45 +0000 (0:00:02.919) 0:23:00.936 ****** 2026-02-17 06:09:49.968562 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 06:09:49.968573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 06:09:49.968584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 06:09:49.968595 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:09:49.968606 | orchestrator | 2026-02-17 06:09:49.968616 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:09:49.968627 | 
orchestrator | Tuesday 17 February 2026 06:09:47 +0000 (0:00:01.408) 0:23:02.345 ****** 2026-02-17 06:09:49.968641 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:09:49.968655 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:09:49.968667 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:09:49.968686 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:09:49.968697 | orchestrator | 2026-02-17 06:09:49.968708 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:09:49.968719 | orchestrator | Tuesday 17 February 2026 06:09:48 +0000 (0:00:01.685) 0:23:04.030 ****** 2026-02-17 06:09:49.968740 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:10:10.208413 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:10:10.208527 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:10:10.208547 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:10:10.208561 | orchestrator | 2026-02-17 06:10:10.208573 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:10:10.208586 | orchestrator | Tuesday 17 February 2026 06:09:49 +0000 (0:00:01.194) 0:23:05.224 ****** 2026-02-17 06:10:10.208652 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:09:43.342204', 'end': '2026-02-17 06:09:43.398158', 'delta': '0:00:00.055954', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:10:10.208670 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:09:43.939817', 'end': '2026-02-17 06:09:43.970189', 'delta': '0:00:00.030372', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:10:10.208683 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:09:44.455459', 'end': '2026-02-17 06:09:44.504432', 'delta': '0:00:00.048973', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:10:10.208716 | orchestrator | 2026-02-17 06:10:10.208730 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:10:10.208763 | orchestrator | Tuesday 17 February 2026 06:09:51 +0000 (0:00:01.240) 0:23:06.465 ****** 2026-02-17 06:10:10.208780 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:10:10.208797 | orchestrator | 2026-02-17 06:10:10.208813 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:10:10.208830 | orchestrator | Tuesday 17 February 2026 06:09:52 
+0000 (0:00:01.279) 0:23:07.745 ****** 2026-02-17 06:10:10.208846 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:10:10.208862 | orchestrator | 2026-02-17 06:10:10.208879 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:10:10.208897 | orchestrator | Tuesday 17 February 2026 06:09:53 +0000 (0:00:01.280) 0:23:09.026 ****** 2026-02-17 06:10:10.208912 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:10:10.208930 | orchestrator | 2026-02-17 06:10:10.208947 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:10:10.208967 | orchestrator | Tuesday 17 February 2026 06:09:54 +0000 (0:00:01.164) 0:23:10.190 ****** 2026-02-17 06:10:10.209008 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:10:10.209026 | orchestrator | 2026-02-17 06:10:10.209039 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:10:10.209051 | orchestrator | Tuesday 17 February 2026 06:09:56 +0000 (0:00:02.020) 0:23:12.211 ****** 2026-02-17 06:10:10.209065 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:10:10.209078 | orchestrator | 2026-02-17 06:10:10.209090 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:10:10.209102 | orchestrator | Tuesday 17 February 2026 06:09:58 +0000 (0:00:01.133) 0:23:13.345 ****** 2026-02-17 06:10:10.209114 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:10:10.209126 | orchestrator | 2026-02-17 06:10:10.209140 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:10:10.209160 | orchestrator | Tuesday 17 February 2026 06:09:59 +0000 (0:00:01.098) 0:23:14.443 ****** 2026-02-17 06:10:10.209180 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:10:10.209200 | orchestrator | 2026-02-17 06:10:10.209220 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
**********************************************
2026-02-17 06:10:10.209238 | orchestrator | Tuesday 17 February 2026 06:10:00 +0000 (0:00:01.636) 0:23:16.080 ******
2026-02-17 06:10:10.209251 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:10.209263 | orchestrator |
2026-02-17 06:10:10.209275 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-17 06:10:10.209288 | orchestrator | Tuesday 17 February 2026 06:10:01 +0000 (0:00:01.140) 0:23:17.221 ******
2026-02-17 06:10:10.209301 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:10.209313 | orchestrator |
2026-02-17 06:10:10.209325 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-17 06:10:10.209336 | orchestrator | Tuesday 17 February 2026 06:10:03 +0000 (0:00:01.211) 0:23:18.433 ******
2026-02-17 06:10:10.209347 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:10.209418 | orchestrator |
2026-02-17 06:10:10.209430 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-17 06:10:10.209441 | orchestrator | Tuesday 17 February 2026 06:10:04 +0000 (0:00:01.156) 0:23:19.590 ******
2026-02-17 06:10:10.209452 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:10.209463 | orchestrator |
2026-02-17 06:10:10.209474 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-17 06:10:10.209493 | orchestrator | Tuesday 17 February 2026 06:10:05 +0000 (0:00:01.177) 0:23:20.767 ******
2026-02-17 06:10:10.209517 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:10.209528 | orchestrator |
2026-02-17 06:10:10.209539 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-17 06:10:10.209550 | orchestrator | Tuesday 17 February 2026 06:10:06 +0000 (0:00:01.216) 0:23:21.984 ******
2026-02-17 06:10:10.209560 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:10.209572 | orchestrator |
2026-02-17 06:10:10.209583 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-17 06:10:10.209594 | orchestrator | Tuesday 17 February 2026 06:10:07 +0000 (0:00:01.156) 0:23:23.141 ******
2026-02-17 06:10:10.209604 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:10.209615 | orchestrator |
2026-02-17 06:10:10.209626 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-17 06:10:10.209638 | orchestrator | Tuesday 17 February 2026 06:10:09 +0000 (0:00:01.132) 0:23:24.274 ******
2026-02-17 06:10:10.209650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:10:10.209662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:10:10.209674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:10:10.209686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-17 06:10:10.209708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:10:11.457705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:10:11.457802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:10:11.457859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69a38e66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-17 06:10:11.457876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:10:11.457887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:10:11.457898 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:11.457910 | orchestrator |
2026-02-17 06:10:11.457921 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-17 06:10:11.457932 | orchestrator | Tuesday 17 February 2026 06:10:10 +0000 (0:00:01.194) 0:23:25.468 ******
2026-02-17 06:10:11.457961 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:11.457986 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:11.457997 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:11.458008 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:11.458077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:11.458088 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:11.458108 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:32.854147 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69a38e66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:32.854266 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:32.854285 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:10:32.854298 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:32.854312 | orchestrator |
2026-02-17 06:10:32.854324 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-17 06:10:32.854336 | orchestrator | Tuesday 17 February 2026 06:10:11 +0000 (0:00:01.250) 0:23:26.719 ******
2026-02-17 06:10:32.854424 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:10:32.854439 | orchestrator |
2026-02-17 06:10:32.854450 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-17 06:10:32.854462 | orchestrator | Tuesday 17 February 2026 06:10:13 +0000 (0:00:01.588) 0:23:28.307 ******
2026-02-17 06:10:32.854473 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:10:32.854484 | orchestrator |
2026-02-17 06:10:32.854495 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:10:32.854523 | orchestrator | Tuesday 17 February 2026 06:10:14 +0000 (0:00:01.113) 0:23:29.421 ******
2026-02-17 06:10:32.854535 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:10:32.854546 | orchestrator |
2026-02-17 06:10:32.854557 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:10:32.854568 | orchestrator | Tuesday 17 February 2026 06:10:15 +0000 (0:00:01.531) 0:23:30.952 ******
2026-02-17 06:10:32.854579 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:32.854590 | orchestrator |
2026-02-17 06:10:32.854601 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:10:32.854612 | orchestrator | Tuesday 17 February 2026 06:10:16 +0000 (0:00:01.169) 0:23:32.122 ******
2026-02-17 06:10:32.854623 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:32.854634 | orchestrator |
2026-02-17 06:10:32.854645 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:10:32.854656 | orchestrator | Tuesday 17 February 2026 06:10:18 +0000 (0:00:01.251) 0:23:33.374 ******
2026-02-17 06:10:32.854667 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:32.854678 | orchestrator |
2026-02-17 06:10:32.854689 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:10:32.854707 | orchestrator | Tuesday 17 February 2026 06:10:19 +0000 (0:00:01.141) 0:23:34.515 ******
2026-02-17 06:10:32.854718 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:10:32.854730 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 06:10:32.854741 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 06:10:32.854751 | orchestrator |
2026-02-17 06:10:32.854762 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:10:32.854773 | orchestrator | Tuesday 17 February 2026 06:10:20 +0000 (0:00:01.721) 0:23:36.236 ******
2026-02-17 06:10:32.854785 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:10:32.854796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 06:10:32.854807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 06:10:32.854818 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:32.854828 | orchestrator |
2026-02-17 06:10:32.854840 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 06:10:32.854851 | orchestrator | Tuesday 17 February 2026 06:10:22 +0000 (0:00:01.169) 0:23:37.406 ******
2026-02-17 06:10:32.854862 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:32.854873 | orchestrator |
2026-02-17 06:10:32.854884 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 06:10:32.854895 | orchestrator | Tuesday 17 February 2026 06:10:23 +0000 (0:00:01.187) 0:23:38.594 ******
2026-02-17 06:10:32.854906 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:10:32.854917 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:10:32.854929 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:10:32.854940 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:10:32.854951 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:10:32.854961 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:10:32.854973 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:10:32.854993 | orchestrator |
2026-02-17 06:10:32.855004 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 06:10:32.855015 | orchestrator | Tuesday 17 February 2026 06:10:25 +0000 (0:00:01.963) 0:23:40.558 ******
2026-02-17 06:10:32.855026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:10:32.855037 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:10:32.855048 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:10:32.855058 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:10:32.855070 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:10:32.855081 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:10:32.855091 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:10:32.855102 | orchestrator |
2026-02-17 06:10:32.855113 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:10:32.855124 | orchestrator | Tuesday 17 February 2026 06:10:27 +0000 (0:00:02.612) 0:23:43.171 ******
2026-02-17 06:10:32.855135 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-17 06:10:32.855147 | orchestrator |
2026-02-17 06:10:32.855159 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 06:10:32.855170 | orchestrator | Tuesday 17 February 2026 06:10:29 +0000 (0:00:01.126) 0:23:44.297 ******
2026-02-17 06:10:32.855181 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-17 06:10:32.855192 | orchestrator |
2026-02-17 06:10:32.855202 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 06:10:32.855214 | orchestrator | Tuesday 17 February 2026 06:10:30 +0000 (0:00:01.533) 0:23:45.463 ******
2026-02-17 06:10:32.855225 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:10:32.855236 | orchestrator |
2026-02-17 06:10:32.855247 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 06:10:32.855258 | orchestrator | Tuesday 17 February 2026 06:10:31 +0000 (0:00:01.533) 0:23:46.996 ******
2026-02-17 06:10:32.855270 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:10:32.855281 | orchestrator |
2026-02-17 06:10:32.855297 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 06:11:23.741189 | orchestrator | Tuesday 17 February 2026 06:10:32 +0000 (0:00:01.114) 0:23:48.110 ******
2026-02-17 06:11:23.741305 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741323 | orchestrator |
2026-02-17 06:11:23.741336 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 06:11:23.741347 | orchestrator | Tuesday 17 February 2026 06:10:33 +0000 (0:00:01.136) 0:23:49.247 ******
2026-02-17 06:11:23.741359 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741370 | orchestrator |
2026-02-17 06:11:23.741381 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 06:11:23.741393 | orchestrator | Tuesday 17 February 2026 06:10:35 +0000 (0:00:01.137) 0:23:50.385 ******
2026-02-17 06:11:23.741464 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.741478 | orchestrator |
2026-02-17 06:11:23.741490 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 06:11:23.741502 | orchestrator | Tuesday 17 February 2026 06:10:36 +0000 (0:00:01.533) 0:23:51.918 ******
2026-02-17 06:11:23.741513 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741524 | orchestrator |
2026-02-17 06:11:23.741551 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 06:11:23.741562 | orchestrator | Tuesday 17 February 2026 06:10:37 +0000 (0:00:01.137) 0:23:53.055 ******
2026-02-17 06:11:23.741574 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741585 | orchestrator |
2026-02-17 06:11:23.741596 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 06:11:23.741629 | orchestrator | Tuesday 17 February 2026 06:10:38 +0000 (0:00:01.150) 0:23:54.206 ******
2026-02-17 06:11:23.741641 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.741652 | orchestrator |
2026-02-17 06:11:23.741663 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 06:11:23.741674 | orchestrator | Tuesday 17 February 2026 06:10:40 +0000 (0:00:01.536) 0:23:55.743 ******
2026-02-17 06:11:23.741685 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.741696 | orchestrator |
2026-02-17 06:11:23.741707 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 06:11:23.741718 | orchestrator | Tuesday 17 February 2026 06:10:42 +0000 (0:00:01.570) 0:23:57.313 ******
2026-02-17 06:11:23.741731 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741743 | orchestrator |
2026-02-17 06:11:23.741756 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:11:23.741768 | orchestrator | Tuesday 17 February 2026 06:10:43 +0000 (0:00:01.121) 0:23:58.435 ******
2026-02-17 06:11:23.741780 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.741793 | orchestrator |
2026-02-17 06:11:23.741805 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:11:23.741818 | orchestrator | Tuesday 17 February 2026 06:10:44 +0000 (0:00:01.202) 0:23:59.638 ******
2026-02-17 06:11:23.741830 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741842 | orchestrator |
2026-02-17 06:11:23.741855 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:11:23.741867 | orchestrator | Tuesday 17 February 2026 06:10:45 +0000 (0:00:01.202) 0:24:00.840 ******
2026-02-17 06:11:23.741879 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741892 | orchestrator |
2026-02-17 06:11:23.741904 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:11:23.741917 | orchestrator | Tuesday 17 February 2026 06:10:46 +0000 (0:00:01.116) 0:24:01.956 ******
2026-02-17 06:11:23.741929 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741941 | orchestrator |
2026-02-17 06:11:23.741954 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:11:23.741966 | orchestrator | Tuesday 17 February 2026 06:10:47 +0000 (0:00:01.177) 0:24:03.134 ******
2026-02-17 06:11:23.741978 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.741991 | orchestrator |
2026-02-17 06:11:23.742003 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:11:23.742071 | orchestrator | Tuesday 17 February 2026 06:10:49 +0000 (0:00:01.209) 0:24:04.343 ******
2026-02-17 06:11:23.742085 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742098 | orchestrator |
2026-02-17 06:11:23.742109 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:11:23.742120 | orchestrator | Tuesday 17 February 2026 06:10:50 +0000 (0:00:01.170) 0:24:05.514 ******
2026-02-17 06:11:23.742130 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.742141 | orchestrator |
2026-02-17 06:11:23.742152 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:11:23.742163 | orchestrator | Tuesday 17 February 2026 06:10:51 +0000 (0:00:01.238) 0:24:06.753 ******
2026-02-17 06:11:23.742174 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.742185 | orchestrator |
2026-02-17 06:11:23.742196 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:11:23.742207 | orchestrator | Tuesday 17 February 2026 06:10:52 +0000 (0:00:01.182) 0:24:07.935 ******
2026-02-17 06:11:23.742217 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.742228 | orchestrator |
2026-02-17 06:11:23.742239 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:11:23.742250 | orchestrator | Tuesday 17 February 2026 06:10:53 +0000 (0:00:01.168) 0:24:09.104 ******
2026-02-17 06:11:23.742261 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742272 | orchestrator |
2026-02-17 06:11:23.742283 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:11:23.742303 | orchestrator | Tuesday 17 February 2026 06:10:54 +0000 (0:00:01.155) 0:24:10.259 ******
2026-02-17 06:11:23.742315 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742325 | orchestrator |
2026-02-17 06:11:23.742336 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:11:23.742347 | orchestrator | Tuesday 17 February 2026 06:10:56 +0000 (0:00:01.128) 0:24:11.388 ******
2026-02-17 06:11:23.742358 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742369 | orchestrator |
2026-02-17 06:11:23.742379 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:11:23.742390 | orchestrator | Tuesday 17 February 2026 06:10:57 +0000 (0:00:01.121) 0:24:12.510 ******
2026-02-17 06:11:23.742465 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742478 | orchestrator |
2026-02-17 06:11:23.742489 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:11:23.742500 | orchestrator | Tuesday 17 February 2026 06:10:58 +0000 (0:00:01.181) 0:24:13.692 ******
2026-02-17 06:11:23.742511 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742522 | orchestrator |
2026-02-17 06:11:23.742533 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:11:23.742544 | orchestrator | Tuesday 17 February 2026 06:10:59 +0000 (0:00:01.125) 0:24:14.817 ******
2026-02-17 06:11:23.742555 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742566 | orchestrator |
2026-02-17 06:11:23.742577 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:11:23.742588 | orchestrator | Tuesday 17 February 2026 06:11:00 +0000 (0:00:01.189) 0:24:16.006 ******
2026-02-17 06:11:23.742599 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742610 | orchestrator |
2026-02-17 06:11:23.742621 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:11:23.742639 | orchestrator | Tuesday 17 February 2026 06:11:01 +0000 (0:00:01.129) 0:24:17.136 ******
2026-02-17 06:11:23.742651 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742662 | orchestrator |
2026-02-17 06:11:23.742673 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:11:23.742684 | orchestrator | Tuesday 17 February 2026 06:11:03 +0000 (0:00:01.193) 0:24:18.330 ******
2026-02-17 06:11:23.742695 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742705 | orchestrator |
2026-02-17 06:11:23.742716 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:11:23.742727 | orchestrator | Tuesday 17 February 2026 06:11:04 +0000 (0:00:01.138) 0:24:19.468 ******
2026-02-17 06:11:23.742738 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742749 | orchestrator |
2026-02-17 06:11:23.742760 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:11:23.742771 | orchestrator | Tuesday 17 February 2026 06:11:05 +0000 (0:00:01.148) 0:24:20.617 ******
2026-02-17 06:11:23.742782 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742793 | orchestrator |
2026-02-17 06:11:23.742804 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:11:23.742815 | orchestrator | Tuesday 17 February 2026 06:11:06 +0000 (0:00:01.117) 0:24:21.734 ******
2026-02-17 06:11:23.742826 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.742837 | orchestrator |
2026-02-17 06:11:23.742848 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:11:23.742859 | orchestrator | Tuesday 17 February 2026 06:11:07 +0000 (0:00:01.114) 0:24:22.849 ******
2026-02-17 06:11:23.742870 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.742881 | orchestrator |
2026-02-17 06:11:23.742892 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:11:23.742902 | orchestrator | Tuesday 17 February 2026 06:11:09 +0000 (0:00:02.078) 0:24:24.928 ******
2026-02-17 06:11:23.742913 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.742924 | orchestrator |
2026-02-17 06:11:23.742935 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:11:23.742954 | orchestrator | Tuesday 17 February 2026 06:11:12 +0000 (0:00:02.663) 0:24:27.592 ******
2026-02-17 06:11:23.742965 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-17 06:11:23.742977 | orchestrator |
2026-02-17 06:11:23.742988 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:11:23.743000 | orchestrator | Tuesday 17 February 2026 06:11:13 +0000 (0:00:01.139) 0:24:28.731 ******
2026-02-17 06:11:23.743010 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.743021 | orchestrator |
2026-02-17 06:11:23.743032 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:11:23.743043 | orchestrator | Tuesday 17 February 2026 06:11:14 +0000 (0:00:01.137) 0:24:29.868 ******
2026-02-17 06:11:23.743054 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.743065 | orchestrator |
2026-02-17 06:11:23.743076 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:11:23.743087 | orchestrator | Tuesday 17 February 2026 06:11:15 +0000 (0:00:01.162) 0:24:31.031 ******
2026-02-17 06:11:23.743098 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:11:23.743109 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:11:23.743119 | orchestrator |
2026-02-17 06:11:23.743130 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:11:23.743141 | orchestrator | Tuesday 17 February 2026 06:11:17 +0000 (0:00:01.812) 0:24:32.844 ******
2026-02-17 06:11:23.743153 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:11:23.743163 | orchestrator |
2026-02-17 06:11:23.743174 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:11:23.743185 | orchestrator | Tuesday 17 February 2026 06:11:19 +0000 (0:00:01.533) 0:24:34.378 ******
2026-02-17 06:11:23.743196 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.743207 | orchestrator |
2026-02-17 06:11:23.743218 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:11:23.743229 | orchestrator | Tuesday 17 February 2026 06:11:20 +0000 (0:00:01.134) 0:24:35.512 ******
2026-02-17 06:11:23.743240 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.743251 | orchestrator |
2026-02-17 06:11:23.743262 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:11:23.743273 | orchestrator | Tuesday 17 February 2026 06:11:21 +0000 (0:00:01.135) 0:24:36.647 ******
2026-02-17 06:11:23.743284 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:11:23.743295 | orchestrator |
2026-02-17 06:11:23.743305 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:11:23.743316 | orchestrator | Tuesday 17 February 2026 06:11:22 +0000 (0:00:01.105) 0:24:37.753 ******
2026-02-17 06:11:23.743328 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-17 06:11:23.743339 | orchestrator |
2026-02-17 06:11:23.743356 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 06:12:11.191710 | orchestrator | Tuesday 17 February 2026 06:11:23 +0000 (0:00:01.242) 0:24:38.996 ******
2026-02-17 06:12:11.191828 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:12:11.191844 | orchestrator |
2026-02-17 06:12:11.191858 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 06:12:11.191870 | orchestrator | Tuesday 17 February 2026 06:11:25 +0000 (0:00:01.724) 0:24:40.720 ******
2026-02-17 06:12:11.191881 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:12:11.191893 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:12:11.191904 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:12:11.191915 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:12:11.191926 | orchestrator |
2026-02-17 06:12:11.191961 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 06:12:11.191986 | orchestrator | Tuesday 17 February 2026 06:11:26 +0000 (0:00:01.181) 0:24:41.902 ******
2026-02-17 06:12:11.191997 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:12:11.192008 | orchestrator |
2026-02-17 06:12:11.192020 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 06:12:11.192031 | orchestrator | Tuesday 17 February 2026 06:11:27 +0000 (0:00:01.143) 0:24:43.046 ******
2026-02-17 06:12:11.192042 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:12:11.192053 | orchestrator |
2026-02-17 06:12:11.192065 | orchestrator | TASK [ceph-container-common : Copy ceph dev image
file] ************************ 2026-02-17 06:12:11.192076 | orchestrator | Tuesday 17 February 2026 06:11:28 +0000 (0:00:01.198) 0:24:44.244 ****** 2026-02-17 06:12:11.192087 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192098 | orchestrator | 2026-02-17 06:12:11.192109 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-17 06:12:11.192119 | orchestrator | Tuesday 17 February 2026 06:11:30 +0000 (0:00:01.188) 0:24:45.433 ****** 2026-02-17 06:12:11.192130 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192141 | orchestrator | 2026-02-17 06:12:11.192152 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-17 06:12:11.192163 | orchestrator | Tuesday 17 February 2026 06:11:31 +0000 (0:00:01.205) 0:24:46.638 ****** 2026-02-17 06:12:11.192174 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192185 | orchestrator | 2026-02-17 06:12:11.192196 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:12:11.192207 | orchestrator | Tuesday 17 February 2026 06:11:32 +0000 (0:00:01.177) 0:24:47.815 ****** 2026-02-17 06:12:11.192218 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:12:11.192229 | orchestrator | 2026-02-17 06:12:11.192240 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:12:11.192253 | orchestrator | Tuesday 17 February 2026 06:11:35 +0000 (0:00:02.471) 0:24:50.287 ****** 2026-02-17 06:12:11.192266 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:12:11.192278 | orchestrator | 2026-02-17 06:12:11.192291 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-17 06:12:11.192303 | orchestrator | Tuesday 17 February 2026 06:11:36 +0000 (0:00:01.179) 0:24:51.467 ****** 2026-02-17 06:12:11.192316 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-17 06:12:11.192329 | orchestrator | 2026-02-17 06:12:11.192341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-17 06:12:11.192354 | orchestrator | Tuesday 17 February 2026 06:11:37 +0000 (0:00:01.171) 0:24:52.639 ****** 2026-02-17 06:12:11.192365 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192376 | orchestrator | 2026-02-17 06:12:11.192387 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-17 06:12:11.192398 | orchestrator | Tuesday 17 February 2026 06:11:38 +0000 (0:00:01.178) 0:24:53.817 ****** 2026-02-17 06:12:11.192411 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192478 | orchestrator | 2026-02-17 06:12:11.192500 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-17 06:12:11.192517 | orchestrator | Tuesday 17 February 2026 06:11:39 +0000 (0:00:01.142) 0:24:54.959 ****** 2026-02-17 06:12:11.192537 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192548 | orchestrator | 2026-02-17 06:12:11.192560 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-17 06:12:11.192570 | orchestrator | Tuesday 17 February 2026 06:11:40 +0000 (0:00:01.249) 0:24:56.209 ****** 2026-02-17 06:12:11.192582 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192592 | orchestrator | 2026-02-17 06:12:11.192603 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-17 06:12:11.192615 | orchestrator | Tuesday 17 February 2026 06:11:42 +0000 (0:00:01.177) 0:24:57.387 ****** 2026-02-17 06:12:11.192626 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192647 | orchestrator | 2026-02-17 06:12:11.192658 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-02-17 06:12:11.192669 | orchestrator | Tuesday 17 February 2026 06:11:43 +0000 (0:00:01.131) 0:24:58.518 ****** 2026-02-17 06:12:11.192680 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192691 | orchestrator | 2026-02-17 06:12:11.192702 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-17 06:12:11.192713 | orchestrator | Tuesday 17 February 2026 06:11:44 +0000 (0:00:01.216) 0:24:59.735 ****** 2026-02-17 06:12:11.192724 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192735 | orchestrator | 2026-02-17 06:12:11.192746 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-17 06:12:11.192757 | orchestrator | Tuesday 17 February 2026 06:11:45 +0000 (0:00:01.133) 0:25:00.869 ****** 2026-02-17 06:12:11.192768 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.192779 | orchestrator | 2026-02-17 06:12:11.192789 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-17 06:12:11.192800 | orchestrator | Tuesday 17 February 2026 06:11:46 +0000 (0:00:01.178) 0:25:02.048 ****** 2026-02-17 06:12:11.192811 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:12:11.192822 | orchestrator | 2026-02-17 06:12:11.192850 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-17 06:12:11.192862 | orchestrator | Tuesday 17 February 2026 06:11:47 +0000 (0:00:01.195) 0:25:03.243 ****** 2026-02-17 06:12:11.192873 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-17 06:12:11.192885 | orchestrator | 2026-02-17 06:12:11.192897 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-17 06:12:11.192908 | orchestrator | Tuesday 17 February 2026 06:11:49 +0000 (0:00:01.274) 0:25:04.518 ****** 2026-02-17 
06:12:11.192919 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-17 06:12:11.192930 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-17 06:12:11.192941 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-17 06:12:11.192952 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-17 06:12:11.192963 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-17 06:12:11.192979 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-17 06:12:11.192991 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-17 06:12:11.193002 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-17 06:12:11.193013 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-17 06:12:11.193024 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-17 06:12:11.193035 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-17 06:12:11.193046 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-17 06:12:11.193057 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 06:12:11.193191 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 06:12:11.193204 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-17 06:12:11.193215 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-17 06:12:11.193226 | orchestrator | 2026-02-17 06:12:11.193237 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 06:12:11.193248 | orchestrator | Tuesday 17 February 2026 06:11:55 +0000 (0:00:06.740) 0:25:11.258 ****** 2026-02-17 06:12:11.193259 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193270 | orchestrator | 2026-02-17 06:12:11.193281 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-02-17 06:12:11.193292 | orchestrator | Tuesday 17 February 2026 06:11:57 +0000 (0:00:01.137) 0:25:12.396 ****** 2026-02-17 06:12:11.193302 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193313 | orchestrator | 2026-02-17 06:12:11.193324 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-17 06:12:11.193343 | orchestrator | Tuesday 17 February 2026 06:11:58 +0000 (0:00:01.149) 0:25:13.545 ****** 2026-02-17 06:12:11.193354 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193365 | orchestrator | 2026-02-17 06:12:11.193376 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-17 06:12:11.193387 | orchestrator | Tuesday 17 February 2026 06:11:59 +0000 (0:00:01.157) 0:25:14.703 ****** 2026-02-17 06:12:11.193398 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193408 | orchestrator | 2026-02-17 06:12:11.193419 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-17 06:12:11.193452 | orchestrator | Tuesday 17 February 2026 06:12:00 +0000 (0:00:01.120) 0:25:15.824 ****** 2026-02-17 06:12:11.193464 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193475 | orchestrator | 2026-02-17 06:12:11.193486 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-17 06:12:11.193497 | orchestrator | Tuesday 17 February 2026 06:12:01 +0000 (0:00:01.128) 0:25:16.952 ****** 2026-02-17 06:12:11.193508 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193519 | orchestrator | 2026-02-17 06:12:11.193530 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-17 06:12:11.193541 | orchestrator | Tuesday 17 February 2026 06:12:02 +0000 (0:00:01.272) 0:25:18.224 ****** 2026-02-17 
06:12:11.193552 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193563 | orchestrator | 2026-02-17 06:12:11.193574 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-17 06:12:11.193585 | orchestrator | Tuesday 17 February 2026 06:12:04 +0000 (0:00:01.131) 0:25:19.356 ****** 2026-02-17 06:12:11.193596 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193607 | orchestrator | 2026-02-17 06:12:11.193618 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-17 06:12:11.193629 | orchestrator | Tuesday 17 February 2026 06:12:05 +0000 (0:00:01.209) 0:25:20.566 ****** 2026-02-17 06:12:11.193640 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193651 | orchestrator | 2026-02-17 06:12:11.193662 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-17 06:12:11.193673 | orchestrator | Tuesday 17 February 2026 06:12:06 +0000 (0:00:01.149) 0:25:21.715 ****** 2026-02-17 06:12:11.193684 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193695 | orchestrator | 2026-02-17 06:12:11.193705 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-17 06:12:11.193717 | orchestrator | Tuesday 17 February 2026 06:12:07 +0000 (0:00:01.134) 0:25:22.849 ****** 2026-02-17 06:12:11.193728 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193738 | orchestrator | 2026-02-17 06:12:11.193749 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-17 06:12:11.193761 | orchestrator | Tuesday 17 February 2026 06:12:08 +0000 (0:00:01.226) 0:25:24.076 ****** 2026-02-17 06:12:11.193772 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193783 | orchestrator | 2026-02-17 06:12:11.193793 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-17 06:12:11.193805 | orchestrator | Tuesday 17 February 2026 06:12:09 +0000 (0:00:01.137) 0:25:25.214 ****** 2026-02-17 06:12:11.193816 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:12:11.193827 | orchestrator | 2026-02-17 06:12:11.193848 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-17 06:13:11.458084 | orchestrator | Tuesday 17 February 2026 06:12:11 +0000 (0:00:01.233) 0:25:26.447 ****** 2026-02-17 06:13:11.458230 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458247 | orchestrator | 2026-02-17 06:13:11.458260 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-17 06:13:11.458272 | orchestrator | Tuesday 17 February 2026 06:12:12 +0000 (0:00:01.111) 0:25:27.559 ****** 2026-02-17 06:13:11.458311 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458323 | orchestrator | 2026-02-17 06:13:11.458334 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-17 06:13:11.458346 | orchestrator | Tuesday 17 February 2026 06:12:13 +0000 (0:00:01.247) 0:25:28.807 ****** 2026-02-17 06:13:11.458357 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458368 | orchestrator | 2026-02-17 06:13:11.458379 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-17 06:13:11.458408 | orchestrator | Tuesday 17 February 2026 06:12:14 +0000 (0:00:01.186) 0:25:29.994 ****** 2026-02-17 06:13:11.458419 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458430 | orchestrator | 2026-02-17 06:13:11.458442 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:13:11.458455 | orchestrator | Tuesday 17 
February 2026 06:12:15 +0000 (0:00:01.234) 0:25:31.228 ****** 2026-02-17 06:13:11.458493 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458504 | orchestrator | 2026-02-17 06:13:11.458515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:13:11.458526 | orchestrator | Tuesday 17 February 2026 06:12:17 +0000 (0:00:01.176) 0:25:32.405 ****** 2026-02-17 06:13:11.458539 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458552 | orchestrator | 2026-02-17 06:13:11.458564 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:13:11.458576 | orchestrator | Tuesday 17 February 2026 06:12:18 +0000 (0:00:01.189) 0:25:33.594 ****** 2026-02-17 06:13:11.458589 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458601 | orchestrator | 2026-02-17 06:13:11.458614 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:13:11.458626 | orchestrator | Tuesday 17 February 2026 06:12:19 +0000 (0:00:01.120) 0:25:34.715 ****** 2026-02-17 06:13:11.458638 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458650 | orchestrator | 2026-02-17 06:13:11.458663 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:13:11.458677 | orchestrator | Tuesday 17 February 2026 06:12:20 +0000 (0:00:01.164) 0:25:35.880 ****** 2026-02-17 06:13:11.458690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-17 06:13:11.458702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-17 06:13:11.458715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-17 06:13:11.458727 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458739 | orchestrator | 2026-02-17 06:13:11.458752 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_interface - ipv4] ****** 2026-02-17 06:13:11.458765 | orchestrator | Tuesday 17 February 2026 06:12:22 +0000 (0:00:01.795) 0:25:37.675 ****** 2026-02-17 06:13:11.458776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-17 06:13:11.458789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-17 06:13:11.458801 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-17 06:13:11.458813 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458825 | orchestrator | 2026-02-17 06:13:11.458838 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:13:11.458850 | orchestrator | Tuesday 17 February 2026 06:12:24 +0000 (0:00:01.810) 0:25:39.486 ****** 2026-02-17 06:13:11.458862 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-17 06:13:11.458874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-17 06:13:11.458887 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-17 06:13:11.458899 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458911 | orchestrator | 2026-02-17 06:13:11.458923 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:13:11.458934 | orchestrator | Tuesday 17 February 2026 06:12:26 +0000 (0:00:01.839) 0:25:41.325 ****** 2026-02-17 06:13:11.458945 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.458964 | orchestrator | 2026-02-17 06:13:11.458975 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:13:11.458986 | orchestrator | Tuesday 17 February 2026 06:12:27 +0000 (0:00:01.276) 0:25:42.601 ****** 2026-02-17 06:13:11.458998 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-17 06:13:11.459009 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.459020 | orchestrator 
| 2026-02-17 06:13:11.459031 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-17 06:13:11.459042 | orchestrator | Tuesday 17 February 2026 06:12:28 +0000 (0:00:01.350) 0:25:43.952 ****** 2026-02-17 06:13:11.459053 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:13:11.459064 | orchestrator | 2026-02-17 06:13:11.459075 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:13:11.459086 | orchestrator | Tuesday 17 February 2026 06:12:30 +0000 (0:00:01.887) 0:25:45.840 ****** 2026-02-17 06:13:11.459097 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 06:13:11.459108 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:13:11.459120 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:13:11.459131 | orchestrator | 2026-02-17 06:13:11.459142 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-17 06:13:11.459153 | orchestrator | Tuesday 17 February 2026 06:12:32 +0000 (0:00:01.729) 0:25:47.569 ****** 2026-02-17 06:13:11.459164 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-02-17 06:13:11.459175 | orchestrator | 2026-02-17 06:13:11.459205 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-17 06:13:11.459217 | orchestrator | Tuesday 17 February 2026 06:12:33 +0000 (0:00:01.496) 0:25:49.065 ****** 2026-02-17 06:13:11.459228 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:13:11.459239 | orchestrator | 2026-02-17 06:13:11.459250 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-17 06:13:11.459261 | orchestrator | Tuesday 17 February 2026 06:12:35 +0000 (0:00:01.507) 0:25:50.573 ****** 2026-02-17 06:13:11.459271 | 
orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.459282 | orchestrator | 2026-02-17 06:13:11.459293 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-17 06:13:11.459304 | orchestrator | Tuesday 17 February 2026 06:12:36 +0000 (0:00:01.177) 0:25:51.750 ****** 2026-02-17 06:13:11.459316 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-17 06:13:11.459327 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-17 06:13:11.459338 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-17 06:13:11.459349 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-17 06:13:11.459360 | orchestrator | 2026-02-17 06:13:11.459371 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-17 06:13:11.459382 | orchestrator | Tuesday 17 February 2026 06:12:43 +0000 (0:00:07.374) 0:25:59.125 ****** 2026-02-17 06:13:11.459393 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:13:11.459404 | orchestrator | 2026-02-17 06:13:11.459415 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-17 06:13:11.459426 | orchestrator | Tuesday 17 February 2026 06:12:45 +0000 (0:00:01.233) 0:26:00.359 ****** 2026-02-17 06:13:11.459436 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-17 06:13:11.459447 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 06:13:11.459458 | orchestrator | 2026-02-17 06:13:11.459491 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-17 06:13:11.459502 | orchestrator | Tuesday 17 February 2026 06:12:48 +0000 (0:00:03.558) 0:26:03.918 ****** 2026-02-17 06:13:11.459513 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-17 06:13:11.459524 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-17 06:13:11.459535 | orchestrator 
| 2026-02-17 06:13:11.459546 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-17 06:13:11.459564 | orchestrator | Tuesday 17 February 2026 06:12:50 +0000 (0:00:02.122) 0:26:06.040 ****** 2026-02-17 06:13:11.459588 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:13:11.459599 | orchestrator | 2026-02-17 06:13:11.459622 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-17 06:13:11.459724 | orchestrator | Tuesday 17 February 2026 06:12:52 +0000 (0:00:01.558) 0:26:07.599 ****** 2026-02-17 06:13:11.459744 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.459755 | orchestrator | 2026-02-17 06:13:11.459766 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-17 06:13:11.459777 | orchestrator | Tuesday 17 February 2026 06:12:53 +0000 (0:00:01.165) 0:26:08.764 ****** 2026-02-17 06:13:11.459788 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.459799 | orchestrator | 2026-02-17 06:13:11.459810 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-17 06:13:11.459822 | orchestrator | Tuesday 17 February 2026 06:12:54 +0000 (0:00:01.135) 0:26:09.900 ****** 2026-02-17 06:13:11.459833 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-02-17 06:13:11.459844 | orchestrator | 2026-02-17 06:13:11.459855 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-17 06:13:11.459866 | orchestrator | Tuesday 17 February 2026 06:12:56 +0000 (0:00:01.487) 0:26:11.388 ****** 2026-02-17 06:13:11.459877 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.459887 | orchestrator | 2026-02-17 06:13:11.459898 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-17 06:13:11.459915 | orchestrator | 
Tuesday 17 February 2026 06:12:57 +0000 (0:00:01.131) 0:26:12.520 ****** 2026-02-17 06:13:11.459927 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.459937 | orchestrator | 2026-02-17 06:13:11.459948 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-17 06:13:11.459959 | orchestrator | Tuesday 17 February 2026 06:12:58 +0000 (0:00:01.180) 0:26:13.700 ****** 2026-02-17 06:13:11.459970 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-02-17 06:13:11.459981 | orchestrator | 2026-02-17 06:13:11.459991 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-17 06:13:11.460002 | orchestrator | Tuesday 17 February 2026 06:12:59 +0000 (0:00:01.489) 0:26:15.190 ****** 2026-02-17 06:13:11.460013 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:13:11.460024 | orchestrator | 2026-02-17 06:13:11.460035 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-17 06:13:11.460046 | orchestrator | Tuesday 17 February 2026 06:13:01 +0000 (0:00:02.054) 0:26:17.244 ****** 2026-02-17 06:13:11.460057 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:13:11.460067 | orchestrator | 2026-02-17 06:13:11.460078 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-17 06:13:11.460089 | orchestrator | Tuesday 17 February 2026 06:13:03 +0000 (0:00:02.021) 0:26:19.266 ****** 2026-02-17 06:13:11.460101 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:13:11.460111 | orchestrator | 2026-02-17 06:13:11.460122 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-17 06:13:11.460133 | orchestrator | Tuesday 17 February 2026 06:13:06 +0000 (0:00:02.472) 0:26:21.739 ****** 2026-02-17 06:13:11.460144 | orchestrator | changed: [testbed-node-0] 2026-02-17 06:13:11.460155 | orchestrator | 
2026-02-17 06:13:11.460166 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-17 06:13:11.460177 | orchestrator | Tuesday 17 February 2026 06:13:10 +0000 (0:00:04.006) 0:26:25.745 ****** 2026-02-17 06:13:11.460188 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:13:11.460198 | orchestrator | 2026-02-17 06:13:11.460210 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-17 06:13:11.460221 | orchestrator | 2026-02-17 06:13:11.460240 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-17 06:13:49.216859 | orchestrator | Tuesday 17 February 2026 06:13:11 +0000 (0:00:00.968) 0:26:26.714 ****** 2026-02-17 06:13:49.216998 | orchestrator | changed: [testbed-node-1] 2026-02-17 06:13:49.217016 | orchestrator | 2026-02-17 06:13:49.217029 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-17 06:13:49.217041 | orchestrator | Tuesday 17 February 2026 06:13:23 +0000 (0:00:12.523) 0:26:39.238 ****** 2026-02-17 06:13:49.217052 | orchestrator | changed: [testbed-node-1] 2026-02-17 06:13:49.217064 | orchestrator | 2026-02-17 06:13:49.217866 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:13:49.217900 | orchestrator | Tuesday 17 February 2026 06:13:26 +0000 (0:00:02.098) 0:26:41.337 ****** 2026-02-17 06:13:49.217919 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-17 06:13:49.217938 | orchestrator | 2026-02-17 06:13:49.217978 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:13:49.217998 | orchestrator | Tuesday 17 February 2026 06:13:27 +0000 (0:00:01.131) 0:26:42.468 ****** 2026-02-17 06:13:49.218070 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:13:49.218092 | orchestrator | 
2026-02-17 06:13:49.218109 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:13:49.218126 | orchestrator | Tuesday 17 February 2026 06:13:28 +0000 (0:00:01.509) 0:26:43.978 ****** 2026-02-17 06:13:49.218143 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:13:49.218169 | orchestrator | 2026-02-17 06:13:49.218187 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:13:49.218205 | orchestrator | Tuesday 17 February 2026 06:13:29 +0000 (0:00:01.142) 0:26:45.120 ****** 2026-02-17 06:13:49.218225 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:13:49.218246 | orchestrator | 2026-02-17 06:13:49.218266 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:13:49.218286 | orchestrator | Tuesday 17 February 2026 06:13:31 +0000 (0:00:01.607) 0:26:46.727 ****** 2026-02-17 06:13:49.218307 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:13:49.218327 | orchestrator | 2026-02-17 06:13:49.218346 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:13:49.218367 | orchestrator | Tuesday 17 February 2026 06:13:32 +0000 (0:00:01.169) 0:26:47.897 ****** 2026-02-17 06:13:49.218387 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:13:49.218405 | orchestrator | 2026-02-17 06:13:49.218424 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:13:49.218446 | orchestrator | Tuesday 17 February 2026 06:13:33 +0000 (0:00:01.254) 0:26:49.151 ****** 2026-02-17 06:13:49.218467 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:13:49.218479 | orchestrator | 2026-02-17 06:13:49.218552 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:13:49.218573 | orchestrator | Tuesday 17 February 2026 06:13:35 +0000 (0:00:01.255) 0:26:50.407 
****** 2026-02-17 06:13:49.218592 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:13:49.218612 | orchestrator | 2026-02-17 06:13:49.218631 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:13:49.218648 | orchestrator | Tuesday 17 February 2026 06:13:36 +0000 (0:00:01.151) 0:26:51.558 ****** 2026-02-17 06:13:49.218666 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:13:49.218683 | orchestrator | 2026-02-17 06:13:49.218702 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:13:49.218720 | orchestrator | Tuesday 17 February 2026 06:13:37 +0000 (0:00:01.136) 0:26:52.695 ****** 2026-02-17 06:13:49.218739 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:13:49.218757 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 06:13:49.218774 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:13:49.218795 | orchestrator | 2026-02-17 06:13:49.218812 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:13:49.218832 | orchestrator | Tuesday 17 February 2026 06:13:39 +0000 (0:00:01.760) 0:26:54.456 ****** 2026-02-17 06:13:49.218884 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:13:49.218906 | orchestrator | 2026-02-17 06:13:49.218925 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:13:49.218943 | orchestrator | Tuesday 17 February 2026 06:13:40 +0000 (0:00:01.235) 0:26:55.691 ****** 2026-02-17 06:13:49.218960 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:13:49.218971 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 06:13:49.218983 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 
=> (item=testbed-node-2) 2026-02-17 06:13:49.218993 | orchestrator | 2026-02-17 06:13:49.219004 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:13:49.219015 | orchestrator | Tuesday 17 February 2026 06:13:43 +0000 (0:00:02.907) 0:26:58.599 ****** 2026-02-17 06:13:49.219028 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-17 06:13:49.219047 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-17 06:13:49.219064 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-17 06:13:49.219082 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:13:49.219098 | orchestrator | 2026-02-17 06:13:49.219114 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:13:49.219132 | orchestrator | Tuesday 17 February 2026 06:13:44 +0000 (0:00:01.456) 0:27:00.055 ****** 2026-02-17 06:13:49.219153 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:13:49.219203 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:13:49.219220 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:13:49.219232 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:13:49.219251 | orchestrator | 2026-02-17 06:13:49.219269 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-02-17 06:13:49.219287 | orchestrator | Tuesday 17 February 2026 06:13:46 +0000 (0:00:01.978) 0:27:02.033 ****** 2026-02-17 06:13:49.219319 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:13:49.219340 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:13:49.219358 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:13:49.219374 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:13:49.219406 | orchestrator | 2026-02-17 06:13:49.219423 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:13:49.219440 | orchestrator | Tuesday 17 February 2026 06:13:47 +0000 (0:00:01.218) 0:27:03.252 ****** 2026-02-17 06:13:49.219462 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '1568ba736cf3', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:13:40.962040', 'end': '2026-02-17 06:13:41.004805', 'delta': '0:00:00.042765', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:13:49.219514 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:13:41.546083', 'end': '2026-02-17 06:13:41.598387', 'delta': '0:00:00.052304', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:13:49.219550 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:13:42.147614', 'end': '2026-02-17 06:13:42.198772', 'delta': '0:00:00.051158', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:14:08.113078 | orchestrator | 2026-02-17 06:14:08.113195 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:14:08.113211 | orchestrator | Tuesday 17 February 2026 06:13:49 +0000 (0:00:01.220) 0:27:04.472 ****** 2026-02-17 06:14:08.113224 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:14:08.113236 | orchestrator | 2026-02-17 06:14:08.113248 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:14:08.113281 | orchestrator | Tuesday 17 February 2026 06:13:50 +0000 (0:00:01.317) 0:27:05.790 ****** 2026-02-17 06:14:08.113304 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113317 | orchestrator | 2026-02-17 06:14:08.113345 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:14:08.113357 | orchestrator | Tuesday 17 February 2026 06:13:51 +0000 (0:00:01.332) 0:27:07.123 ****** 2026-02-17 06:14:08.113369 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:14:08.113380 | orchestrator | 2026-02-17 06:14:08.113392 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:14:08.113403 | orchestrator | Tuesday 17 February 2026 06:13:53 +0000 (0:00:01.199) 0:27:08.322 ****** 2026-02-17 06:14:08.113414 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:14:08.113425 | orchestrator | 2026-02-17 06:14:08.113437 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:14:08.113448 | orchestrator | Tuesday 17 February 2026 06:13:54 +0000 (0:00:01.944) 0:27:10.267 ****** 2026-02-17 
06:14:08.113480 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:14:08.113546 | orchestrator | 2026-02-17 06:14:08.113560 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:14:08.113571 | orchestrator | Tuesday 17 February 2026 06:13:56 +0000 (0:00:01.165) 0:27:11.432 ****** 2026-02-17 06:14:08.113582 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113593 | orchestrator | 2026-02-17 06:14:08.113604 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:14:08.113615 | orchestrator | Tuesday 17 February 2026 06:13:57 +0000 (0:00:01.170) 0:27:12.603 ****** 2026-02-17 06:14:08.113626 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113637 | orchestrator | 2026-02-17 06:14:08.113648 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:14:08.113659 | orchestrator | Tuesday 17 February 2026 06:13:58 +0000 (0:00:01.294) 0:27:13.897 ****** 2026-02-17 06:14:08.113670 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113681 | orchestrator | 2026-02-17 06:14:08.113692 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:14:08.113703 | orchestrator | Tuesday 17 February 2026 06:13:59 +0000 (0:00:01.141) 0:27:15.039 ****** 2026-02-17 06:14:08.113714 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113725 | orchestrator | 2026-02-17 06:14:08.113736 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:14:08.113748 | orchestrator | Tuesday 17 February 2026 06:14:00 +0000 (0:00:01.153) 0:27:16.192 ****** 2026-02-17 06:14:08.113759 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113770 | orchestrator | 2026-02-17 06:14:08.113781 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-02-17 06:14:08.113791 | orchestrator | Tuesday 17 February 2026 06:14:02 +0000 (0:00:01.159) 0:27:17.352 ****** 2026-02-17 06:14:08.113802 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113814 | orchestrator | 2026-02-17 06:14:08.113825 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:14:08.113836 | orchestrator | Tuesday 17 February 2026 06:14:03 +0000 (0:00:01.226) 0:27:18.579 ****** 2026-02-17 06:14:08.113847 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113858 | orchestrator | 2026-02-17 06:14:08.113869 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:14:08.113880 | orchestrator | Tuesday 17 February 2026 06:14:04 +0000 (0:00:01.124) 0:27:19.704 ****** 2026-02-17 06:14:08.113891 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113902 | orchestrator | 2026-02-17 06:14:08.113913 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:14:08.113925 | orchestrator | Tuesday 17 February 2026 06:14:05 +0000 (0:00:01.122) 0:27:20.826 ****** 2026-02-17 06:14:08.113936 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:08.113947 | orchestrator | 2026-02-17 06:14:08.113958 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:14:08.113969 | orchestrator | Tuesday 17 February 2026 06:14:06 +0000 (0:00:01.157) 0:27:21.984 ****** 2026-02-17 06:14:08.113982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-17 06:14:08.113997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:14:08.114107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:14:08.114131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:14:08.114145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-02-17 06:14:08.114157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:14:08.114168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:14:08.114193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd83a89d3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14', 
'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:14:09.359150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:14:09.359265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:14:09.359281 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:09.359293 | orchestrator | 2026-02-17 06:14:09.359303 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:14:09.359313 | orchestrator | Tuesday 17 February 2026 06:14:08 +0000 (0:00:01.377) 0:27:23.361 ****** 2026-02-17 06:14:09.359324 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:09.359336 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:09.359346 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:09.359356 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:09.359403 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:09.359420 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:09.359430 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:09.359442 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd83a89d3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d83a89d3-91a6-467d-8248-bfeccded0a7a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:09.359469 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:46.664229 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:14:46.664351 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:14:46.664369 | orchestrator | 2026-02-17 06:14:46.664382 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 06:14:46.664395 | 
orchestrator | Tuesday 17 February 2026 06:14:09 +0000 (0:00:01.260) 0:27:24.622 ******
2026-02-17 06:14:46.664407 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:14:46.664418 | orchestrator |
2026-02-17 06:14:46.664430 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-17 06:14:46.664441 | orchestrator | Tuesday 17 February 2026 06:14:10 +0000 (0:00:01.494) 0:27:26.117 ******
2026-02-17 06:14:46.664452 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:14:46.664463 | orchestrator |
2026-02-17 06:14:46.664474 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:14:46.664485 | orchestrator | Tuesday 17 February 2026 06:14:11 +0000 (0:00:01.139) 0:27:27.256 ******
2026-02-17 06:14:46.664496 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:14:46.664507 | orchestrator |
2026-02-17 06:14:46.664588 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:14:46.664600 | orchestrator | Tuesday 17 February 2026 06:14:13 +0000 (0:00:01.509) 0:27:28.766 ******
2026-02-17 06:14:46.664611 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.664622 | orchestrator |
2026-02-17 06:14:46.664633 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:14:46.664645 | orchestrator | Tuesday 17 February 2026 06:14:14 +0000 (0:00:01.168) 0:27:29.935 ******
2026-02-17 06:14:46.664656 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.664667 | orchestrator |
2026-02-17 06:14:46.664678 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:14:46.664689 | orchestrator | Tuesday 17 February 2026 06:14:15 +0000 (0:00:01.322) 0:27:31.257 ******
2026-02-17 06:14:46.664700 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.664711 | orchestrator |
2026-02-17 06:14:46.664723 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:14:46.664735 | orchestrator | Tuesday 17 February 2026 06:14:17 +0000 (0:00:01.217) 0:27:32.475 ******
2026-02-17 06:14:46.664771 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-17 06:14:46.664785 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-17 06:14:46.664798 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-17 06:14:46.664810 | orchestrator |
2026-02-17 06:14:46.664823 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:14:46.664834 | orchestrator | Tuesday 17 February 2026 06:14:18 +0000 (0:00:01.678) 0:27:34.153 ******
2026-02-17 06:14:46.664847 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-17 06:14:46.664859 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-17 06:14:46.664871 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-17 06:14:46.664884 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.664896 | orchestrator |
2026-02-17 06:14:46.664908 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 06:14:46.664921 | orchestrator | Tuesday 17 February 2026 06:14:20 +0000 (0:00:01.155) 0:27:35.309 ******
2026-02-17 06:14:46.664933 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.664945 | orchestrator |
2026-02-17 06:14:46.664957 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 06:14:46.664970 | orchestrator | Tuesday 17 February 2026 06:14:21 +0000 (0:00:01.194) 0:27:36.504 ******
2026-02-17 06:14:46.664983 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:14:46.664996 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-17 06:14:46.665008 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:14:46.665021 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:14:46.665034 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:14:46.665046 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:14:46.665059 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:14:46.665071 | orchestrator |
2026-02-17 06:14:46.665082 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 06:14:46.665093 | orchestrator | Tuesday 17 February 2026 06:14:23 +0000 (0:00:02.184) 0:27:38.688 ******
2026-02-17 06:14:46.665104 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:14:46.665115 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-17 06:14:46.665126 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:14:46.665137 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:14:46.665166 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:14:46.665185 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:14:46.665196 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:14:46.665207 | orchestrator |
2026-02-17 06:14:46.665222 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:14:46.665240 | orchestrator | Tuesday 17 February 2026 06:14:25 +0000 (0:00:02.310) 0:27:40.999 ******
2026-02-17 06:14:46.665258 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-02-17 06:14:46.665278 | orchestrator |
2026-02-17 06:14:46.665294 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 06:14:46.665310 | orchestrator | Tuesday 17 February 2026 06:14:26 +0000 (0:00:01.215) 0:27:42.214 ******
2026-02-17 06:14:46.665327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-02-17 06:14:46.665343 | orchestrator |
2026-02-17 06:14:46.665358 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 06:14:46.665388 | orchestrator | Tuesday 17 February 2026 06:14:28 +0000 (0:00:01.173) 0:27:43.388 ******
2026-02-17 06:14:46.665407 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:14:46.665426 | orchestrator |
2026-02-17 06:14:46.665447 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 06:14:46.665466 | orchestrator | Tuesday 17 February 2026 06:14:29 +0000 (0:00:01.517) 0:27:44.906 ******
2026-02-17 06:14:46.665480 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.665491 | orchestrator |
2026-02-17 06:14:46.665502 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 06:14:46.665541 | orchestrator | Tuesday 17 February 2026 06:14:30 +0000 (0:00:01.137) 0:27:46.044 ******
2026-02-17 06:14:46.665557 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.665568 | orchestrator |
2026-02-17 06:14:46.665579 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 06:14:46.665590 | orchestrator | Tuesday 17 February 2026 06:14:31 +0000 (0:00:01.152) 0:27:47.197 ******
2026-02-17 06:14:46.665601 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.665612 | orchestrator |
2026-02-17 06:14:46.665623 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 06:14:46.665634 | orchestrator | Tuesday 17 February 2026 06:14:33 +0000 (0:00:01.234) 0:27:48.431 ******
2026-02-17 06:14:46.665644 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:14:46.665655 | orchestrator |
2026-02-17 06:14:46.665666 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 06:14:46.665677 | orchestrator | Tuesday 17 February 2026 06:14:34 +0000 (0:00:01.622) 0:27:50.054 ******
2026-02-17 06:14:46.665688 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.665698 | orchestrator |
2026-02-17 06:14:46.665709 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 06:14:46.665720 | orchestrator | Tuesday 17 February 2026 06:14:35 +0000 (0:00:01.160) 0:27:51.214 ******
2026-02-17 06:14:46.665731 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.665742 | orchestrator |
2026-02-17 06:14:46.665753 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 06:14:46.665764 | orchestrator | Tuesday 17 February 2026 06:14:37 +0000 (0:00:01.146) 0:27:52.361 ******
2026-02-17 06:14:46.665775 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:14:46.665785 | orchestrator |
2026-02-17 06:14:46.665796 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 06:14:46.665807 | orchestrator | Tuesday 17 February 2026 06:14:38 +0000 (0:00:01.636) 0:27:53.997 ******
2026-02-17 06:14:46.665818 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:14:46.665829 | orchestrator |
2026-02-17 06:14:46.665840 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 06:14:46.665851 | orchestrator | Tuesday 17 February 2026 06:14:40 +0000 (0:00:01.524) 0:27:55.522 ******
2026-02-17 06:14:46.665861 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.665872 | orchestrator |
2026-02-17 06:14:46.665883 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:14:46.665894 | orchestrator | Tuesday 17 February 2026 06:14:41 +0000 (0:00:00.785) 0:27:56.308 ******
2026-02-17 06:14:46.665905 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:14:46.665915 | orchestrator |
2026-02-17 06:14:46.665926 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:14:46.665937 | orchestrator | Tuesday 17 February 2026 06:14:41 +0000 (0:00:00.806) 0:27:57.114 ******
2026-02-17 06:14:46.665948 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.665959 | orchestrator |
2026-02-17 06:14:46.665970 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:14:46.665981 | orchestrator | Tuesday 17 February 2026 06:14:42 +0000 (0:00:00.784) 0:27:57.900 ******
2026-02-17 06:14:46.665992 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.666002 | orchestrator |
2026-02-17 06:14:46.666014 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:14:46.666097 | orchestrator | Tuesday 17 February 2026 06:14:43 +0000 (0:00:00.777) 0:27:58.677 ******
2026-02-17 06:14:46.666108 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.666119 | orchestrator |
2026-02-17 06:14:46.666130 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:14:46.666142 | orchestrator | Tuesday 17 February 2026 06:14:44 +0000 (0:00:00.846) 0:27:59.524 ******
2026-02-17 06:14:46.666153 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.666164 | orchestrator |
2026-02-17 06:14:46.666175 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:14:46.666185 | orchestrator | Tuesday 17 February 2026 06:14:45 +0000 (0:00:00.792) 0:28:00.316 ******
2026-02-17 06:14:46.666196 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:14:46.666207 | orchestrator |
2026-02-17 06:14:46.666218 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:14:46.666229 | orchestrator | Tuesday 17 February 2026 06:14:45 +0000 (0:00:00.803) 0:28:01.119 ******
2026-02-17 06:14:46.666251 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.894399 | orchestrator |
2026-02-17 06:15:28.894531 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:15:28.894602 | orchestrator | Tuesday 17 February 2026 06:14:46 +0000 (0:00:00.804) 0:28:01.924 ******
2026-02-17 06:15:28.894614 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.894626 | orchestrator |
2026-02-17 06:15:28.894638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:15:28.894650 | orchestrator | Tuesday 17 February 2026 06:14:47 +0000 (0:00:00.833) 0:28:02.758 ******
2026-02-17 06:15:28.894661 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.894672 | orchestrator |
2026-02-17 06:15:28.894683 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:15:28.894694 | orchestrator | Tuesday 17 February 2026 06:14:48 +0000 (0:00:00.827) 0:28:03.586 ******
2026-02-17 06:15:28.894706 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.894717 | orchestrator |
2026-02-17 06:15:28.894731 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:15:28.894749 | orchestrator | Tuesday 17 February 2026 06:14:49 +0000 (0:00:00.802) 0:28:04.388 ******
2026-02-17 06:15:28.894767 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.894785 | orchestrator |
2026-02-17 06:15:28.894803 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:15:28.894822 | orchestrator | Tuesday 17 February 2026 06:14:49 +0000 (0:00:00.793) 0:28:05.181 ******
2026-02-17 06:15:28.894836 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.894847 | orchestrator |
2026-02-17 06:15:28.894858 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:15:28.894869 | orchestrator | Tuesday 17 February 2026 06:14:50 +0000 (0:00:00.791) 0:28:05.973 ******
2026-02-17 06:15:28.894880 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.894891 | orchestrator |
2026-02-17 06:15:28.894902 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:15:28.894913 | orchestrator | Tuesday 17 February 2026 06:14:51 +0000 (0:00:00.941) 0:28:06.914 ******
2026-02-17 06:15:28.894925 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.894937 | orchestrator |
2026-02-17 06:15:28.894951 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:15:28.894964 | orchestrator | Tuesday 17 February 2026 06:14:52 +0000 (0:00:00.769) 0:28:07.684 ******
2026-02-17 06:15:28.894976 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.894989 | orchestrator |
2026-02-17 06:15:28.895001 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:15:28.895013 | orchestrator | Tuesday 17 February 2026 06:14:53 +0000 (0:00:00.760) 0:28:08.444 ******
2026-02-17 06:15:28.895026 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895038 | orchestrator |
2026-02-17 06:15:28.895051 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:15:28.895088 | orchestrator | Tuesday 17 February 2026 06:14:53 +0000 (0:00:00.775) 0:28:09.220 ******
2026-02-17 06:15:28.895101 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895114 | orchestrator |
2026-02-17 06:15:28.895127 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:15:28.895140 | orchestrator | Tuesday 17 February 2026 06:14:54 +0000 (0:00:00.820) 0:28:10.041 ******
2026-02-17 06:15:28.895153 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895165 | orchestrator |
2026-02-17 06:15:28.895178 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:15:28.895191 | orchestrator | Tuesday 17 February 2026 06:14:55 +0000 (0:00:00.787) 0:28:10.829 ******
2026-02-17 06:15:28.895203 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895216 | orchestrator |
2026-02-17 06:15:28.895228 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:15:28.895241 | orchestrator | Tuesday 17 February 2026 06:14:56 +0000 (0:00:00.828) 0:28:11.658 ******
2026-02-17 06:15:28.895253 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895266 | orchestrator |
2026-02-17 06:15:28.895279 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:15:28.895292 | orchestrator | Tuesday 17 February 2026 06:14:57 +0000 (0:00:00.898) 0:28:12.556 ******
2026-02-17 06:15:28.895302 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895313 | orchestrator |
2026-02-17 06:15:28.895324 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:15:28.895335 | orchestrator | Tuesday 17 February 2026 06:14:58 +0000 (0:00:00.785) 0:28:13.342 ******
2026-02-17 06:15:28.895353 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.895372 | orchestrator |
2026-02-17 06:15:28.895389 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:15:28.895405 | orchestrator | Tuesday 17 February 2026 06:14:59 +0000 (0:00:01.615) 0:28:14.958 ******
2026-02-17 06:15:28.895423 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.895441 | orchestrator |
2026-02-17 06:15:28.895460 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:15:28.895480 | orchestrator | Tuesday 17 February 2026 06:15:01 +0000 (0:00:02.117) 0:28:17.075 ******
2026-02-17 06:15:28.895498 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-17 06:15:28.895519 | orchestrator |
2026-02-17 06:15:28.895571 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:15:28.895591 | orchestrator | Tuesday 17 February 2026 06:15:02 +0000 (0:00:01.146) 0:28:18.222 ******
2026-02-17 06:15:28.895611 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895628 | orchestrator |
2026-02-17 06:15:28.895646 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:15:28.895657 | orchestrator | Tuesday 17 February 2026 06:15:04 +0000 (0:00:01.155) 0:28:19.377 ******
2026-02-17 06:15:28.895672 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895689 | orchestrator |
2026-02-17 06:15:28.895707 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:15:28.895726 | orchestrator | Tuesday 17 February 2026 06:15:05 +0000 (0:00:01.149) 0:28:20.527 ******
2026-02-17 06:15:28.895766 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:15:28.895787 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:15:28.895799 | orchestrator |
2026-02-17 06:15:28.895810 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:15:28.895821 | orchestrator | Tuesday 17 February 2026 06:15:07 +0000 (0:00:01.859) 0:28:22.387 ******
2026-02-17 06:15:28.895832 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.895843 | orchestrator |
2026-02-17 06:15:28.895854 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:15:28.895874 | orchestrator | Tuesday 17 February 2026 06:15:08 +0000 (0:00:01.521) 0:28:23.908 ******
2026-02-17 06:15:28.895885 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895896 | orchestrator |
2026-02-17 06:15:28.895907 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:15:28.895918 | orchestrator | Tuesday 17 February 2026 06:15:09 +0000 (0:00:01.249) 0:28:25.158 ******
2026-02-17 06:15:28.895929 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895940 | orchestrator |
2026-02-17 06:15:28.895951 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:15:28.895962 | orchestrator | Tuesday 17 February 2026 06:15:10 +0000 (0:00:00.843) 0:28:26.001 ******
2026-02-17 06:15:28.895972 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.895983 | orchestrator |
2026-02-17 06:15:28.895995 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:15:28.896005 | orchestrator | Tuesday 17 February 2026 06:15:11 +0000 (0:00:00.782) 0:28:26.783 ******
2026-02-17 06:15:28.896016 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-17 06:15:28.896027 | orchestrator |
2026-02-17 06:15:28.896038 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 06:15:28.896049 | orchestrator | Tuesday 17 February 2026 06:15:12 +0000 (0:00:01.116) 0:28:27.900 ******
2026-02-17 06:15:28.896060 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.896071 | orchestrator |
2026-02-17 06:15:28.896082 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 06:15:28.896092 | orchestrator | Tuesday 17 February 2026 06:15:14 +0000 (0:00:01.817) 0:28:29.717 ******
2026-02-17 06:15:28.896103 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:15:28.896114 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:15:28.896125 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:15:28.896136 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896147 | orchestrator |
2026-02-17 06:15:28.896158 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 06:15:28.896169 | orchestrator | Tuesday 17 February 2026 06:15:15 +0000 (0:00:01.217) 0:28:30.935 ******
2026-02-17 06:15:28.896180 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896191 | orchestrator |
2026-02-17 06:15:28.896202 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 06:15:28.896212 | orchestrator | Tuesday 17 February 2026 06:15:16 +0000 (0:00:01.116) 0:28:32.051 ******
2026-02-17 06:15:28.896223 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896234 | orchestrator |
2026-02-17 06:15:28.896245 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 06:15:28.896256 | orchestrator | Tuesday 17 February 2026 06:15:17 +0000 (0:00:01.182) 0:28:33.233 ******
2026-02-17 06:15:28.896267 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896278 | orchestrator |
2026-02-17 06:15:28.896291 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 06:15:28.896309 | orchestrator | Tuesday 17 February 2026 06:15:19 +0000 (0:00:01.213) 0:28:34.447 ******
2026-02-17 06:15:28.896325 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896339 | orchestrator |
2026-02-17 06:15:28.896355 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-17 06:15:28.896374 | orchestrator | Tuesday 17 February 2026 06:15:20 +0000 (0:00:01.187) 0:28:35.634 ******
2026-02-17 06:15:28.896392 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896410 | orchestrator |
2026-02-17 06:15:28.896428 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 06:15:28.896447 | orchestrator | Tuesday 17 February 2026 06:15:21 +0000 (0:00:00.817) 0:28:36.452 ******
2026-02-17 06:15:28.896464 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.896488 | orchestrator |
2026-02-17 06:15:28.896499 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 06:15:28.896510 | orchestrator | Tuesday 17 February 2026 06:15:23 +0000 (0:00:02.240) 0:28:38.693 ******
2026-02-17 06:15:28.896520 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:15:28.896531 | orchestrator |
2026-02-17 06:15:28.896570 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 06:15:28.896581 | orchestrator | Tuesday 17 February 2026 06:15:24 +0000 (0:00:00.814) 0:28:39.507 ******
2026-02-17 06:15:28.896592 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-17 06:15:28.896603 | orchestrator |
2026-02-17 06:15:28.896614 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 06:15:28.896625 | orchestrator | Tuesday 17 February 2026 06:15:25 +0000 (0:00:01.156) 0:28:40.663 ******
2026-02-17 06:15:28.896636 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896647 | orchestrator |
2026-02-17 06:15:28.896658 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 06:15:28.896669 | orchestrator | Tuesday 17 February 2026 06:15:26 +0000 (0:00:01.194) 0:28:41.858 ******
2026-02-17 06:15:28.896680 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896691 | orchestrator |
2026-02-17 06:15:28.896701 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 06:15:28.896712 | orchestrator | Tuesday 17 February 2026 06:15:27 +0000 (0:00:01.123) 0:28:42.982 ******
2026-02-17 06:15:28.896723 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:15:28.896734 | orchestrator |
2026-02-17 06:15:28.896759 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 06:16:03.639516 | orchestrator | Tuesday 17 February 2026 06:15:28 +0000 (0:00:01.169) 0:28:44.151 ******
2026-02-17 06:16:03.639717 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.639736 | orchestrator |
2026-02-17 06:16:03.639748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 06:16:03.639760 | orchestrator | Tuesday 17 February 2026 06:15:30 +0000 (0:00:01.167) 0:28:45.319 ******
2026-02-17 06:16:03.639771 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.639783 | orchestrator |
2026-02-17 06:16:03.639794 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 06:16:03.639805 | orchestrator | Tuesday 17 February 2026 06:15:31 +0000 (0:00:01.187) 0:28:46.506 ******
2026-02-17 06:16:03.639816 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.639827 | orchestrator |
2026-02-17 06:16:03.639838 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 06:16:03.639849 | orchestrator | Tuesday 17 February 2026 06:15:32 +0000 (0:00:01.161) 0:28:47.667 ******
2026-02-17 06:16:03.639860 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.639871 | orchestrator |
2026-02-17 06:16:03.639882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 06:16:03.639893 | orchestrator | Tuesday 17 February 2026 06:15:33 +0000 (0:00:01.189) 0:28:48.857 ******
2026-02-17 06:16:03.639904 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.639915 | orchestrator |
2026-02-17 06:16:03.639926 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 06:16:03.639937 | orchestrator | Tuesday 17 February 2026 06:15:34 +0000 (0:00:01.140) 0:28:49.997 ******
2026-02-17 06:16:03.639948 | orchestrator | ok: [testbed-node-1]
2026-02-17 06:16:03.639960 | orchestrator |
2026-02-17 06:16:03.639971 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 06:16:03.639982 | orchestrator | Tuesday 17 February 2026 06:15:35 +0000 (0:00:00.823) 0:28:50.820 ******
2026-02-17 06:16:03.639993 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-17 06:16:03.640005 | orchestrator |
2026-02-17 06:16:03.640016 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 06:16:03.640027 | orchestrator | Tuesday 17 February 2026 06:15:36 +0000 (0:00:01.123) 0:28:51.944 ******
2026-02-17 06:16:03.640061 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-17 06:16:03.640075 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-17 06:16:03.640088 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-17 06:16:03.640100 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-17 06:16:03.640112 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-17 06:16:03.640125 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-17 06:16:03.640137 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-17 06:16:03.640150 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-17 06:16:03.640162 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 06:16:03.640175 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 06:16:03.640187 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 06:16:03.640200 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 06:16:03.640212 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 06:16:03.640224 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 06:16:03.640236 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-17 06:16:03.640249 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-17 06:16:03.640261 | orchestrator |
2026-02-17 06:16:03.640274 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 06:16:03.640287 | orchestrator | Tuesday 17 February 2026 06:15:43 +0000 (0:00:06.450) 0:28:58.394 ******
2026-02-17 06:16:03.640299 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640312 | orchestrator |
2026-02-17 06:16:03.640324 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 06:16:03.640337 | orchestrator | Tuesday 17 February 2026 06:15:43 +0000 (0:00:00.774) 0:28:59.169 ******
2026-02-17 06:16:03.640349 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640361 | orchestrator |
2026-02-17 06:16:03.640374 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 06:16:03.640387 | orchestrator | Tuesday 17 February 2026 06:15:44 +0000 (0:00:00.843) 0:29:00.012 ******
2026-02-17 06:16:03.640399 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640411 | orchestrator |
2026-02-17 06:16:03.640422 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 06:16:03.640433 | orchestrator | Tuesday 17 February 2026 06:15:45 +0000 (0:00:00.841) 0:29:00.854 ******
2026-02-17 06:16:03.640444 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640455 | orchestrator |
2026-02-17 06:16:03.640466 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 06:16:03.640477 | orchestrator | Tuesday 17 February 2026 06:15:46 +0000 (0:00:00.762) 0:29:01.616 ******
2026-02-17 06:16:03.640488 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640498 | orchestrator |
2026-02-17 06:16:03.640509 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 06:16:03.640520 | orchestrator | Tuesday 17 February 2026 06:15:47 +0000 (0:00:00.796) 0:29:02.412 ******
2026-02-17 06:16:03.640531 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640542 | orchestrator |
2026-02-17 06:16:03.640591 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 06:16:03.640604 | orchestrator | Tuesday 17 February 2026 06:15:47 +0000 (0:00:00.777) 0:29:03.190 ******
2026-02-17 06:16:03.640615 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640626 | orchestrator |
2026-02-17 06:16:03.640669 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 06:16:03.640683 | orchestrator | Tuesday 17 February 2026 06:15:48 +0000 (0:00:00.830) 0:29:04.020 ******
2026-02-17 06:16:03.640694 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640713 | orchestrator |
2026-02-17 06:16:03.640724 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 06:16:03.640735 | orchestrator | Tuesday 17 February 2026 06:15:49 +0000 (0:00:00.778) 0:29:04.799 ******
2026-02-17 06:16:03.640746 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640757 | orchestrator |
2026-02-17 06:16:03.640768 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 06:16:03.640779 | orchestrator | Tuesday 17 February 2026 06:15:50 +0000 (0:00:00.784) 0:29:05.584 ******
2026-02-17 06:16:03.640790 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640801 | orchestrator |
2026-02-17 06:16:03.640812 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 06:16:03.640823 | orchestrator | Tuesday 17 February 2026 06:15:51 +0000 (0:00:00.804) 0:29:06.389 ******
2026-02-17 06:16:03.640833 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640844 | orchestrator |
2026-02-17 06:16:03.640855 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 06:16:03.640866 | orchestrator | Tuesday 17 February 2026 06:15:51 +0000 (0:00:00.798) 0:29:07.187 ******
2026-02-17 06:16:03.640877 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640888 | orchestrator |
2026-02-17 06:16:03.640899 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 06:16:03.640909 | orchestrator | Tuesday 17 February 2026 06:15:52 +0000 (0:00:00.759) 0:29:07.947 ******
2026-02-17 06:16:03.640920 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640931 | orchestrator |
2026-02-17 06:16:03.640942 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 06:16:03.640953 | orchestrator | Tuesday 17 February 2026 06:15:53 +0000 (0:00:00.970) 0:29:08.917 ******
2026-02-17 06:16:03.640964 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.640975 | orchestrator |
2026-02-17 06:16:03.640986 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 06:16:03.640997 | orchestrator | Tuesday 17 February 2026 06:15:54 +0000 (0:00:00.838) 0:29:09.756 ******
2026-02-17 06:16:03.641008 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641019 | orchestrator |
2026-02-17 06:16:03.641029 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 06:16:03.641040 | orchestrator | Tuesday 17 February 2026 06:15:55 +0000 (0:00:00.930) 0:29:10.686 ******
2026-02-17 06:16:03.641051 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641062 | orchestrator |
2026-02-17 06:16:03.641073 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 06:16:03.641083 | orchestrator | Tuesday 17 February 2026 06:15:56 +0000 (0:00:00.758) 0:29:11.444 ******
2026-02-17 06:16:03.641094 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641105 | orchestrator |
2026-02-17 06:16:03.641116 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:16:03.641128 | orchestrator | Tuesday 17 February 2026 06:15:56 +0000 (0:00:00.798) 0:29:12.243 ******
2026-02-17 06:16:03.641139 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641150 | orchestrator |
2026-02-17 06:16:03.641161 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:16:03.641172 | orchestrator | Tuesday 17 February 2026 06:15:57 +0000 (0:00:00.831) 0:29:13.074 ******
2026-02-17 06:16:03.641183 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641194 | orchestrator |
2026-02-17 06:16:03.641205 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:16:03.641216 | orchestrator | Tuesday 17 February 2026 06:15:58 +0000 (0:00:00.837) 0:29:13.912 ******
2026-02-17 06:16:03.641227 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641238 | orchestrator |
2026-02-17 06:16:03.641249 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:16:03.641260 | orchestrator | Tuesday 17 February 2026 06:15:59 +0000 (0:00:00.820) 0:29:14.732 ******
2026-02-17 06:16:03.641277 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641288 | orchestrator |
2026-02-17 06:16:03.641299 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:16:03.641310 | orchestrator | Tuesday 17 February 2026 06:16:00 +0000 (0:00:00.840) 0:29:15.573 ******
2026-02-17 06:16:03.641321 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-17 06:16:03.641332 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-17 06:16:03.641342 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-17 06:16:03.641353 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641365 | orchestrator |
2026-02-17 06:16:03.641375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:16:03.641386 | orchestrator | Tuesday 17 February 2026 06:16:01 +0000 (0:00:01.130) 0:29:16.703 ******
2026-02-17 06:16:03.641398 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-17 06:16:03.641408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-17 06:16:03.641419 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-17 06:16:03.641430 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641441 | orchestrator |
2026-02-17 06:16:03.641452 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:16:03.641463 | orchestrator | Tuesday 17 February 2026 06:16:02 +0000 (0:00:01.066) 0:29:17.769 ******
2026-02-17 06:16:03.641473 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-17 06:16:03.641484 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-17 06:16:03.641495 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-17 06:16:03.641506 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:16:03.641517 | orchestrator |
2026-02-17 06:16:03.641539 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:17:04.465411 | orchestrator | Tuesday 17 February 2026 06:16:03 +0000 (0:00:01.123) 0:29:18.892 ******
2026-02-17 06:17:04.465541 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:17:04.465558 | orchestrator |
2026-02-17 06:17:04.465569 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:17:04.465579 | orchestrator | Tuesday 17 February 2026 06:16:04 +0000 (0:00:00.825) 0:29:19.718 ******
2026-02-17 06:17:04.465658 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-17 06:17:04.465669 | orchestrator | skipping: [testbed-node-1]
2026-02-17 06:17:04.465678 | orchestrator |
2026-02-17 06:17:04.465689 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 06:17:04.465699 | orchestrator | Tuesday 17 February
2026 06:16:05 +0000 (0:00:00.974) 0:29:20.692 ****** 2026-02-17 06:17:04.465709 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:17:04.465719 | orchestrator | 2026-02-17 06:17:04.465729 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:17:04.465739 | orchestrator | Tuesday 17 February 2026 06:16:06 +0000 (0:00:01.408) 0:29:22.101 ****** 2026-02-17 06:17:04.465749 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:17:04.465760 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-17 06:17:04.465770 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:17:04.465781 | orchestrator | 2026-02-17 06:17:04.465792 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-17 06:17:04.465802 | orchestrator | Tuesday 17 February 2026 06:16:08 +0000 (0:00:01.703) 0:29:23.805 ****** 2026-02-17 06:17:04.465811 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-02-17 06:17:04.465821 | orchestrator | 2026-02-17 06:17:04.465831 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-17 06:17:04.465841 | orchestrator | Tuesday 17 February 2026 06:16:09 +0000 (0:00:01.124) 0:29:24.929 ****** 2026-02-17 06:17:04.465874 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:17:04.465885 | orchestrator | 2026-02-17 06:17:04.465895 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-17 06:17:04.465905 | orchestrator | Tuesday 17 February 2026 06:16:11 +0000 (0:00:01.543) 0:29:26.473 ****** 2026-02-17 06:17:04.465917 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:17:04.465928 | orchestrator | 2026-02-17 06:17:04.465940 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] 
********************* 2026-02-17 06:17:04.465952 | orchestrator | Tuesday 17 February 2026 06:16:12 +0000 (0:00:01.195) 0:29:27.668 ****** 2026-02-17 06:17:04.465964 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:17:04.465975 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:17:04.465986 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:17:04.465997 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-02-17 06:17:04.466009 | orchestrator | 2026-02-17 06:17:04.466081 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-17 06:17:04.466093 | orchestrator | Tuesday 17 February 2026 06:16:19 +0000 (0:00:07.068) 0:29:34.736 ****** 2026-02-17 06:17:04.466104 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:17:04.466116 | orchestrator | 2026-02-17 06:17:04.466127 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-17 06:17:04.466139 | orchestrator | Tuesday 17 February 2026 06:16:20 +0000 (0:00:01.175) 0:29:35.912 ****** 2026-02-17 06:17:04.466150 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-17 06:17:04.466162 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-17 06:17:04.466173 | orchestrator | 2026-02-17 06:17:04.466184 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-17 06:17:04.466196 | orchestrator | Tuesday 17 February 2026 06:16:23 +0000 (0:00:03.264) 0:29:39.177 ****** 2026-02-17 06:17:04.466207 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-17 06:17:04.466221 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-17 06:17:04.466238 | orchestrator | 2026-02-17 06:17:04.466254 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] 
************************************** 2026-02-17 06:17:04.466269 | orchestrator | Tuesday 17 February 2026 06:16:26 +0000 (0:00:02.145) 0:29:41.323 ****** 2026-02-17 06:17:04.466287 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:17:04.466304 | orchestrator | 2026-02-17 06:17:04.466320 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-17 06:17:04.466336 | orchestrator | Tuesday 17 February 2026 06:16:27 +0000 (0:00:01.551) 0:29:42.874 ****** 2026-02-17 06:17:04.466347 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:17:04.466356 | orchestrator | 2026-02-17 06:17:04.466366 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-17 06:17:04.466376 | orchestrator | Tuesday 17 February 2026 06:16:28 +0000 (0:00:00.789) 0:29:43.663 ****** 2026-02-17 06:17:04.466386 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:17:04.466396 | orchestrator | 2026-02-17 06:17:04.466406 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-17 06:17:04.466415 | orchestrator | Tuesday 17 February 2026 06:16:29 +0000 (0:00:00.784) 0:29:44.448 ****** 2026-02-17 06:17:04.466426 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-02-17 06:17:04.466435 | orchestrator | 2026-02-17 06:17:04.466445 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-17 06:17:04.466455 | orchestrator | Tuesday 17 February 2026 06:16:30 +0000 (0:00:01.178) 0:29:45.627 ****** 2026-02-17 06:17:04.466465 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:17:04.466475 | orchestrator | 2026-02-17 06:17:04.466485 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-17 06:17:04.466495 | orchestrator | Tuesday 17 February 2026 06:16:31 +0000 (0:00:01.139) 0:29:46.767 ****** 2026-02-17 
06:17:04.466525 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:17:04.466536 | orchestrator | 2026-02-17 06:17:04.466565 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-17 06:17:04.466575 | orchestrator | Tuesday 17 February 2026 06:16:32 +0000 (0:00:01.185) 0:29:47.952 ****** 2026-02-17 06:17:04.466606 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-02-17 06:17:04.466616 | orchestrator | 2026-02-17 06:17:04.466626 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-17 06:17:04.466635 | orchestrator | Tuesday 17 February 2026 06:16:33 +0000 (0:00:01.284) 0:29:49.237 ****** 2026-02-17 06:17:04.466645 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:17:04.466655 | orchestrator | 2026-02-17 06:17:04.466665 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-17 06:17:04.466674 | orchestrator | Tuesday 17 February 2026 06:16:36 +0000 (0:00:02.067) 0:29:51.304 ****** 2026-02-17 06:17:04.466684 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:17:04.466694 | orchestrator | 2026-02-17 06:17:04.466704 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-17 06:17:04.466713 | orchestrator | Tuesday 17 February 2026 06:16:38 +0000 (0:00:02.013) 0:29:53.317 ****** 2026-02-17 06:17:04.466723 | orchestrator | ok: [testbed-node-1] 2026-02-17 06:17:04.466733 | orchestrator | 2026-02-17 06:17:04.466743 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-17 06:17:04.466752 | orchestrator | Tuesday 17 February 2026 06:16:40 +0000 (0:00:02.465) 0:29:55.783 ****** 2026-02-17 06:17:04.466762 | orchestrator | changed: [testbed-node-1] 2026-02-17 06:17:04.466772 | orchestrator | 2026-02-17 06:17:04.466781 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-02-17 06:17:04.466791 | orchestrator | Tuesday 17 February 2026 06:16:44 +0000 (0:00:03.737) 0:29:59.520 ****** 2026-02-17 06:17:04.466801 | orchestrator | skipping: [testbed-node-1] 2026-02-17 06:17:04.466810 | orchestrator | 2026-02-17 06:17:04.466820 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-17 06:17:04.466830 | orchestrator | 2026-02-17 06:17:04.466840 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-17 06:17:04.466849 | orchestrator | Tuesday 17 February 2026 06:16:45 +0000 (0:00:01.112) 0:30:00.632 ****** 2026-02-17 06:17:04.466859 | orchestrator | changed: [testbed-node-2] 2026-02-17 06:17:04.466869 | orchestrator | 2026-02-17 06:17:04.466879 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-17 06:17:04.466888 | orchestrator | Tuesday 17 February 2026 06:16:47 +0000 (0:00:02.547) 0:30:03.180 ****** 2026-02-17 06:17:04.466898 | orchestrator | changed: [testbed-node-2] 2026-02-17 06:17:04.466908 | orchestrator | 2026-02-17 06:17:04.466917 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:17:04.466927 | orchestrator | Tuesday 17 February 2026 06:16:50 +0000 (0:00:02.141) 0:30:05.321 ****** 2026-02-17 06:17:04.466937 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-17 06:17:04.466946 | orchestrator | 2026-02-17 06:17:04.466956 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:17:04.466966 | orchestrator | Tuesday 17 February 2026 06:16:51 +0000 (0:00:01.143) 0:30:06.464 ****** 2026-02-17 06:17:04.466975 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:04.466985 | orchestrator | 2026-02-17 06:17:04.466995 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2026-02-17 06:17:04.467004 | orchestrator | Tuesday 17 February 2026 06:16:52 +0000 (0:00:01.511) 0:30:07.976 ****** 2026-02-17 06:17:04.467014 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:04.467024 | orchestrator | 2026-02-17 06:17:04.467034 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:17:04.467043 | orchestrator | Tuesday 17 February 2026 06:16:53 +0000 (0:00:01.259) 0:30:09.235 ****** 2026-02-17 06:17:04.467053 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:04.467069 | orchestrator | 2026-02-17 06:17:04.467079 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:17:04.467089 | orchestrator | Tuesday 17 February 2026 06:16:55 +0000 (0:00:01.549) 0:30:10.785 ****** 2026-02-17 06:17:04.467099 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:04.467113 | orchestrator | 2026-02-17 06:17:04.467132 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:17:04.467142 | orchestrator | Tuesday 17 February 2026 06:16:56 +0000 (0:00:01.173) 0:30:11.959 ****** 2026-02-17 06:17:04.467152 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:04.467162 | orchestrator | 2026-02-17 06:17:04.467172 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:17:04.467182 | orchestrator | Tuesday 17 February 2026 06:16:57 +0000 (0:00:01.170) 0:30:13.130 ****** 2026-02-17 06:17:04.467191 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:04.467201 | orchestrator | 2026-02-17 06:17:04.467211 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:17:04.467221 | orchestrator | Tuesday 17 February 2026 06:16:59 +0000 (0:00:01.188) 0:30:14.318 ****** 2026-02-17 06:17:04.467230 | orchestrator | skipping: [testbed-node-2] 2026-02-17 
06:17:04.467240 | orchestrator | 2026-02-17 06:17:04.467250 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:17:04.467260 | orchestrator | Tuesday 17 February 2026 06:17:00 +0000 (0:00:01.210) 0:30:15.529 ****** 2026-02-17 06:17:04.467270 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:04.467280 | orchestrator | 2026-02-17 06:17:04.467289 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:17:04.467299 | orchestrator | Tuesday 17 February 2026 06:17:01 +0000 (0:00:01.135) 0:30:16.665 ****** 2026-02-17 06:17:04.467309 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:17:04.467318 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:17:04.467328 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-17 06:17:04.467338 | orchestrator | 2026-02-17 06:17:04.467348 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:17:04.467363 | orchestrator | Tuesday 17 February 2026 06:17:03 +0000 (0:00:01.758) 0:30:18.423 ****** 2026-02-17 06:17:04.467383 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:29.608819 | orchestrator | 2026-02-17 06:17:29.608913 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:17:29.608924 | orchestrator | Tuesday 17 February 2026 06:17:04 +0000 (0:00:01.301) 0:30:19.725 ****** 2026-02-17 06:17:29.608932 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:17:29.608941 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:17:29.608949 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-17 06:17:29.608956 | orchestrator | 2026-02-17 
06:17:29.608965 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:17:29.608972 | orchestrator | Tuesday 17 February 2026 06:17:08 +0000 (0:00:04.223) 0:30:23.949 ****** 2026-02-17 06:17:29.608980 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-17 06:17:29.608987 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-17 06:17:29.608995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-17 06:17:29.609002 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609010 | orchestrator | 2026-02-17 06:17:29.609018 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:17:29.609025 | orchestrator | Tuesday 17 February 2026 06:17:10 +0000 (0:00:01.417) 0:30:25.366 ****** 2026-02-17 06:17:29.609034 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:17:29.609063 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:17:29.609071 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:17:29.609079 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609086 | orchestrator | 2026-02-17 06:17:29.609094 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:17:29.609101 | orchestrator | 
Tuesday 17 February 2026 06:17:12 +0000 (0:00:01.972) 0:30:27.339 ****** 2026-02-17 06:17:29.609111 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:17:29.609121 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:17:29.609129 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:17:29.609137 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609144 | orchestrator | 2026-02-17 06:17:29.609151 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:17:29.609159 | orchestrator | Tuesday 17 February 2026 06:17:13 +0000 (0:00:01.228) 0:30:28.568 ****** 2026-02-17 06:17:29.609193 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:17:04.946709', 'end': '2026-02-17 06:17:05.001311', 'delta': '0:00:00.054602', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:17:29.609205 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:17:05.501210', 'end': '2026-02-17 06:17:06.554373', 'delta': '0:00:01.053163', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:17:29.609219 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:17:07.397254', 'end': '2026-02-17 06:17:07.442205', 'delta': '0:00:00.044951', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:17:29.609227 | orchestrator | 2026-02-17 06:17:29.609235 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:17:29.609242 | orchestrator | Tuesday 17 February 2026 06:17:14 +0000 (0:00:01.276) 0:30:29.844 ****** 2026-02-17 06:17:29.609249 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:29.609257 | orchestrator | 2026-02-17 06:17:29.609264 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:17:29.609272 | orchestrator | Tuesday 17 February 2026 06:17:15 +0000 (0:00:01.301) 0:30:31.146 ****** 2026-02-17 06:17:29.609279 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609286 | orchestrator | 2026-02-17 06:17:29.609293 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:17:29.609301 | orchestrator | Tuesday 17 February 2026 06:17:17 +0000 (0:00:01.316) 0:30:32.463 ****** 2026-02-17 06:17:29.609308 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:17:29.609315 | orchestrator | 2026-02-17 06:17:29.609323 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:17:29.609330 | orchestrator | Tuesday 17 February 2026 06:17:18 +0000 (0:00:01.185) 0:30:33.648 ****** 2026-02-17 06:17:29.609337 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:17:29.609344 | orchestrator | 2026-02-17 06:17:29.609352 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:17:29.609359 | orchestrator | Tuesday 17 February 2026 06:17:20 +0000 (0:00:01.973) 0:30:35.622 ****** 2026-02-17 06:17:29.609366 | orchestrator | ok: [testbed-node-2] 2026-02-17 
06:17:29.609374 | orchestrator | 2026-02-17 06:17:29.609381 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:17:29.609390 | orchestrator | Tuesday 17 February 2026 06:17:21 +0000 (0:00:01.133) 0:30:36.755 ****** 2026-02-17 06:17:29.609398 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609406 | orchestrator | 2026-02-17 06:17:29.609415 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:17:29.609423 | orchestrator | Tuesday 17 February 2026 06:17:22 +0000 (0:00:01.156) 0:30:37.911 ****** 2026-02-17 06:17:29.609431 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609439 | orchestrator | 2026-02-17 06:17:29.609447 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:17:29.609456 | orchestrator | Tuesday 17 February 2026 06:17:23 +0000 (0:00:01.210) 0:30:39.122 ****** 2026-02-17 06:17:29.609464 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609472 | orchestrator | 2026-02-17 06:17:29.609481 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:17:29.609489 | orchestrator | Tuesday 17 February 2026 06:17:24 +0000 (0:00:01.149) 0:30:40.272 ****** 2026-02-17 06:17:29.609497 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609505 | orchestrator | 2026-02-17 06:17:29.609513 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:17:29.609522 | orchestrator | Tuesday 17 February 2026 06:17:26 +0000 (0:00:01.119) 0:30:41.391 ****** 2026-02-17 06:17:29.609531 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609539 | orchestrator | 2026-02-17 06:17:29.609547 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:17:29.609561 | orchestrator | Tuesday 17 
February 2026 06:17:27 +0000 (0:00:01.136) 0:30:42.528 ****** 2026-02-17 06:17:29.609569 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609577 | orchestrator | 2026-02-17 06:17:29.609586 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:17:29.609619 | orchestrator | Tuesday 17 February 2026 06:17:28 +0000 (0:00:01.203) 0:30:43.731 ****** 2026-02-17 06:17:29.609630 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:29.609638 | orchestrator | 2026-02-17 06:17:29.609647 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:17:29.609665 | orchestrator | Tuesday 17 February 2026 06:17:29 +0000 (0:00:01.128) 0:30:44.860 ****** 2026-02-17 06:17:34.450120 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:34.450198 | orchestrator | 2026-02-17 06:17:34.450206 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:17:34.450213 | orchestrator | Tuesday 17 February 2026 06:17:30 +0000 (0:00:01.180) 0:30:46.040 ****** 2026-02-17 06:17:34.450219 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:17:34.450225 | orchestrator | 2026-02-17 06:17:34.450230 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:17:34.450236 | orchestrator | Tuesday 17 February 2026 06:17:31 +0000 (0:00:01.130) 0:30:47.171 ****** 2026-02-17 06:17:34.450243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:17:34.450252 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:17:34.450257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:17:34.450263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-17 06:17:34.450271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:17:34.450277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:17:34.450308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:17:34.450350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f3163655', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3163655-9995-491d-8d46-91e3626b16e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-17 06:17:34.450362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:17:34.450370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-17 06:17:34.450378 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:17:34.450386 | orchestrator |
2026-02-17 06:17:34.450395 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-17 06:17:34.450403 | orchestrator | Tuesday 17 February 2026 06:17:33 +0000 (0:00:01.306) 0:30:48.477 ******
2026-02-17 06:17:34.450420 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:34.450431 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:34.450450 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:45.709301 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:45.709447 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:45.709471 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:45.709491 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:45.709655 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:45.709689 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:45.709710 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', …}, 'ansible_loop_var': 'item'})
2026-02-17 06:17:45.709742 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:17:45.709764 | orchestrator |
2026-02-17 06:17:45.709786 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-17 06:17:45.709807 | orchestrator | Tuesday 17 February 2026 06:17:34 +0000 (0:00:01.548) 0:30:49.715 ******
2026-02-17 06:17:45.709825 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:17:45.709840 | orchestrator |
2026-02-17 06:17:45.709853 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-17 06:17:45.709866 | orchestrator
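Every candidate above was skipped because testbed-node-2 is not in the OSD host group (the logged `false_condition` is `inventory_hostname in groups.get(osd_group_name, [])`); only when that condition holds does ceph-ansible derive the `devices` list from the `ansible_devices` facts dumped above. A minimal standalone sketch of that kind of selection, assuming only the fact fields visible in this log (the function name and the exact filter criteria are hypothetical, not ceph-ansible's actual implementation):

```python
# Hypothetical sketch of an osd_auto_discovery-style device filter over the
# ansible_devices fact structure shown in this log (loop*, sr0, sda).
# The real logic lives in the ceph-facts role; criteria here are illustrative.

def eligible_osd_devices(ansible_devices):
    """Return /dev paths that look usable as whole-disk OSD data devices."""
    devices = []
    for name, facts in ansible_devices.items():
        if facts.get("removable") != "0":        # e.g. sr0, the QEMU DVD-ROM
            continue
        if facts.get("sectors") in (None, "0"):  # empty loop devices
            continue
        if facts.get("partitions"):              # already partitioned, e.g. sda
            continue
        if facts.get("holders"):                 # claimed by LVM/multipath/md
            continue
        devices.append("/dev/" + name)
    return sorted(devices)
```

With the facts from this run (loop0-7 report zero sectors, sr0 is removable, and sda carries the root filesystem partitions), no device would qualify even on an OSD host.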
| Tuesday 17 February 2026 06:17:35 +0000 (0:00:01.548) 0:30:51.263 ******
2026-02-17 06:17:45.709878 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:17:45.709891 | orchestrator |
2026-02-17 06:17:45.709904 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:17:45.709917 | orchestrator | Tuesday 17 February 2026 06:17:37 +0000 (0:00:01.170) 0:30:52.433 ******
2026-02-17 06:17:45.709928 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:17:45.709939 | orchestrator |
2026-02-17 06:17:45.709950 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:17:45.709961 | orchestrator | Tuesday 17 February 2026 06:17:38 +0000 (0:00:01.547) 0:30:53.981 ******
2026-02-17 06:17:45.709972 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:17:45.709983 | orchestrator |
2026-02-17 06:17:45.709994 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:17:45.710005 | orchestrator | Tuesday 17 February 2026 06:17:39 +0000 (0:00:01.132) 0:30:55.113 ******
2026-02-17 06:17:45.710076 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:17:45.710089 | orchestrator |
2026-02-17 06:17:45.710100 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:17:45.710111 | orchestrator | Tuesday 17 February 2026 06:17:41 +0000 (0:00:01.337) 0:30:56.451 ******
2026-02-17 06:17:45.710122 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:17:45.710133 | orchestrator |
2026-02-17 06:17:45.710144 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:17:45.710155 | orchestrator | Tuesday 17 February 2026 06:17:42 +0000 (0:00:01.183) 0:30:57.634 ******
2026-02-17 06:17:45.710167 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 06:17:45.710178 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 06:17:45.710196 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:17:45.710207 | orchestrator |
2026-02-17 06:17:45.710218 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:17:45.710229 | orchestrator | Tuesday 17 February 2026 06:17:44 +0000 (0:00:02.068) 0:30:59.702 ******
2026-02-17 06:17:45.710240 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-17 06:17:45.710251 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-17 06:17:45.710262 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:17:45.710273 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:17:45.710298 | orchestrator |
2026-02-17 06:17:45.710321 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 06:18:22.777283 | orchestrator | Tuesday 17 February 2026 06:17:45 +0000 (0:00:01.258) 0:31:00.960 ******
2026-02-17 06:18:22.777397 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.777414 | orchestrator |
2026-02-17 06:18:22.777427 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 06:18:22.777439 | orchestrator | Tuesday 17 February 2026 06:17:46 +0000 (0:00:01.175) 0:31:02.136 ******
2026-02-17 06:18:22.777451 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:18:22.777462 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:18:22.777475 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:18:22.777548 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:18:22.777569 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:18:22.777588 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:18:22.777600 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:18:22.777611 | orchestrator |
2026-02-17 06:18:22.777650 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 06:18:22.777662 | orchestrator | Tuesday 17 February 2026 06:17:49 +0000 (0:00:02.337) 0:31:04.474 ******
2026-02-17 06:18:22.777672 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:18:22.777683 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:18:22.777694 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-17 06:18:22.777705 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:18:22.777716 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:18:22.777727 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:18:22.777738 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:18:22.777749 | orchestrator |
2026-02-17 06:18:22.777760 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:18:22.777770 | orchestrator | Tuesday 17 February 2026 06:17:51 +0000 (0:00:02.279) 0:31:06.754 ******
2026-02-17 06:18:22.777781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-02-17 06:18:22.777793 | orchestrator |
2026-02-17 06:18:22.777804 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 06:18:22.777815 | orchestrator | Tuesday 17 February 2026 06:17:52 +0000 (0:00:01.140) 0:31:07.894 ******
2026-02-17 06:18:22.777829 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-02-17 06:18:22.777842 | orchestrator |
2026-02-17 06:18:22.777855 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 06:18:22.777867 | orchestrator | Tuesday 17 February 2026 06:17:53 +0000 (0:00:01.208) 0:31:09.103 ******
2026-02-17 06:18:22.777879 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:18:22.777892 | orchestrator |
2026-02-17 06:18:22.777904 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 06:18:22.777917 | orchestrator | Tuesday 17 February 2026 06:17:55 +0000 (0:00:01.585) 0:31:10.688 ******
2026-02-17 06:18:22.777930 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.777943 | orchestrator |
2026-02-17 06:18:22.777955 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 06:18:22.777967 | orchestrator | Tuesday 17 February 2026 06:17:56 +0000 (0:00:01.120) 0:31:11.808 ******
2026-02-17 06:18:22.777979 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.777992 | orchestrator |
2026-02-17 06:18:22.778005 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 06:18:22.778073 | orchestrator | Tuesday 17 February 2026 06:17:57 +0000 (0:00:01.130) 0:31:12.939 ******
2026-02-17 06:18:22.778087 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778099 | orchestrator |
2026-02-17 06:18:22.778112 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 06:18:22.778124 | orchestrator | Tuesday 17 February 2026 06:17:58 +0000 (0:00:01.194) 0:31:14.134 ******
2026-02-17 06:18:22.778147 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:18:22.778160 | orchestrator |
2026-02-17 06:18:22.778173 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 06:18:22.778184 | orchestrator | Tuesday 17 February 2026 06:18:00 +0000 (0:00:01.560) 0:31:15.695 ******
2026-02-17 06:18:22.778205 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778217 | orchestrator |
2026-02-17 06:18:22.778228 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 06:18:22.778239 | orchestrator | Tuesday 17 February 2026 06:18:01 +0000 (0:00:01.164) 0:31:16.859 ******
2026-02-17 06:18:22.778250 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778261 | orchestrator |
2026-02-17 06:18:22.778286 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 06:18:22.778297 | orchestrator | Tuesday 17 February 2026 06:18:02 +0000 (0:00:01.148) 0:31:18.008 ******
2026-02-17 06:18:22.778308 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:18:22.778319 | orchestrator |
2026-02-17 06:18:22.778330 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 06:18:22.778341 | orchestrator | Tuesday 17 February 2026 06:18:04 +0000 (0:00:01.563) 0:31:19.571 ******
2026-02-17 06:18:22.778352 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:18:22.778363 | orchestrator |
2026-02-17 06:18:22.778374 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 06:18:22.778403 | orchestrator | Tuesday 17 February 2026 06:18:05 +0000 (0:00:01.572) 0:31:21.144 ******
2026-02-17 06:18:22.778414 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778425 | orchestrator |
2026-02-17 06:18:22.778436 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:18:22.778447 | orchestrator | Tuesday 17 February 2026 06:18:06 +0000 (0:00:00.791) 0:31:21.935 ******
2026-02-17 06:18:22.778458 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:18:22.778469 | orchestrator |
2026-02-17 06:18:22.778480 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:18:22.778491 | orchestrator | Tuesday 17 February 2026 06:18:07 +0000 (0:00:00.803) 0:31:22.739 ******
2026-02-17 06:18:22.778502 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778513 | orchestrator |
2026-02-17 06:18:22.778524 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:18:22.778534 | orchestrator | Tuesday 17 February 2026 06:18:08 +0000 (0:00:00.833) 0:31:23.573 ******
2026-02-17 06:18:22.778545 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778556 | orchestrator |
2026-02-17 06:18:22.778567 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:18:22.778578 | orchestrator | Tuesday 17 February 2026 06:18:09 +0000 (0:00:00.768) 0:31:24.341 ******
2026-02-17 06:18:22.778589 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778600 | orchestrator |
2026-02-17 06:18:22.778611 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:18:22.778643 | orchestrator | Tuesday 17 February 2026 06:18:09 +0000 (0:00:00.764) 0:31:25.106 ******
2026-02-17 06:18:22.778654 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778665 | orchestrator |
2026-02-17 06:18:22.778676 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:18:22.778687 | orchestrator | Tuesday 17 February 2026 06:18:10 +0000 (0:00:00.805) 0:31:25.912 ******
2026-02-17 06:18:22.778698 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778709 | orchestrator |
2026-02-17 06:18:22.778720 | orchestrator |
TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:18:22.778731 | orchestrator | Tuesday 17 February 2026 06:18:11 +0000 (0:00:00.793) 0:31:26.705 ******
2026-02-17 06:18:22.778742 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:18:22.778754 | orchestrator |
2026-02-17 06:18:22.778765 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:18:22.778776 | orchestrator | Tuesday 17 February 2026 06:18:12 +0000 (0:00:00.798) 0:31:27.504 ******
2026-02-17 06:18:22.778786 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:18:22.778797 | orchestrator |
2026-02-17 06:18:22.778809 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:18:22.778820 | orchestrator | Tuesday 17 February 2026 06:18:13 +0000 (0:00:00.820) 0:31:28.325 ******
2026-02-17 06:18:22.778838 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:18:22.778849 | orchestrator |
2026-02-17 06:18:22.778860 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:18:22.778871 | orchestrator | Tuesday 17 February 2026 06:18:13 +0000 (0:00:00.835) 0:31:29.161 ******
2026-02-17 06:18:22.778882 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778893 | orchestrator |
2026-02-17 06:18:22.778904 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:18:22.778915 | orchestrator | Tuesday 17 February 2026 06:18:14 +0000 (0:00:00.888) 0:31:30.050 ******
2026-02-17 06:18:22.778926 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778937 | orchestrator |
2026-02-17 06:18:22.778948 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:18:22.778959 | orchestrator | Tuesday 17 February 2026 06:18:15 +0000 (0:00:00.823) 0:31:30.873 ******
2026-02-17 06:18:22.778970 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.778980 | orchestrator |
2026-02-17 06:18:22.778991 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:18:22.779002 | orchestrator | Tuesday 17 February 2026 06:18:16 +0000 (0:00:00.779) 0:31:31.653 ******
2026-02-17 06:18:22.779013 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.779024 | orchestrator |
2026-02-17 06:18:22.779035 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:18:22.779046 | orchestrator | Tuesday 17 February 2026 06:18:17 +0000 (0:00:00.779) 0:31:32.432 ******
2026-02-17 06:18:22.779057 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.779068 | orchestrator |
2026-02-17 06:18:22.779079 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:18:22.779090 | orchestrator | Tuesday 17 February 2026 06:18:17 +0000 (0:00:00.790) 0:31:33.223 ******
2026-02-17 06:18:22.779101 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.779112 | orchestrator |
2026-02-17 06:18:22.779123 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:18:22.779134 | orchestrator | Tuesday 17 February 2026 06:18:18 +0000 (0:00:00.790) 0:31:34.013 ******
2026-02-17 06:18:22.779145 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.779156 | orchestrator |
2026-02-17 06:18:22.779167 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:18:22.779178 | orchestrator | Tuesday 17 February 2026 06:18:19 +0000 (0:00:00.789) 0:31:34.803 ******
2026-02-17 06:18:22.779189 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.779200 | orchestrator |
2026-02-17 06:18:22.779211 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:18:22.779227 | orchestrator | Tuesday 17 February 2026 06:18:20 +0000 (0:00:00.826) 0:31:35.629 ******
2026-02-17 06:18:22.779238 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.779249 | orchestrator |
2026-02-17 06:18:22.779260 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:18:22.779271 | orchestrator | Tuesday 17 February 2026 06:18:21 +0000 (0:00:00.766) 0:31:36.396 ******
2026-02-17 06:18:22.779282 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.779293 | orchestrator |
2026-02-17 06:18:22.779304 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:18:22.779315 | orchestrator | Tuesday 17 February 2026 06:18:21 +0000 (0:00:00.784) 0:31:37.180 ******
2026-02-17 06:18:22.779326 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:18:22.779337 | orchestrator |
2026-02-17 06:18:22.779355 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:19:08.742893 | orchestrator | Tuesday 17 February 2026 06:18:22 +0000 (0:00:00.854) 0:31:38.035 ******
2026-02-17 06:19:08.743012 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.743029 | orchestrator |
2026-02-17 06:19:08.743042 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:19:08.743054 | orchestrator | Tuesday 17 February 2026 06:18:23 +0000 (0:00:00.755) 0:31:38.790 ******
2026-02-17 06:19:08.743089 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:19:08.743102 | orchestrator |
2026-02-17 06:19:08.743113 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:19:08.743124 | orchestrator | Tuesday 17 February 2026 06:18:25 +0000 (0:00:02.049) 0:31:40.424 ******
2026-02-17 06:19:08.743135 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:19:08.743146 | orchestrator |
2026-02-17 06:19:08.743157 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:19:08.743168 | orchestrator | Tuesday 17 February 2026 06:18:27 +0000 (0:00:02.049) 0:31:42.474 ******
2026-02-17 06:19:08.743179 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-17 06:19:08.743191 | orchestrator |
2026-02-17 06:19:08.743202 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:19:08.743213 | orchestrator | Tuesday 17 February 2026 06:18:28 +0000 (0:00:01.141) 0:31:43.615 ******
2026-02-17 06:19:08.743224 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.743235 | orchestrator |
2026-02-17 06:19:08.743246 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:19:08.743256 | orchestrator | Tuesday 17 February 2026 06:18:29 +0000 (0:00:01.138) 0:31:44.754 ******
2026-02-17 06:19:08.743267 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.743278 | orchestrator |
2026-02-17 06:19:08.743289 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:19:08.743300 | orchestrator | Tuesday 17 February 2026 06:18:30 +0000 (0:00:01.163) 0:31:45.917 ******
2026-02-17 06:19:08.743311 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:19:08.743322 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:19:08.743333 | orchestrator |
2026-02-17 06:19:08.743344 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:19:08.743355 | orchestrator | Tuesday 17 February 2026 06:18:32 +0000 (0:00:01.957) 0:31:47.874 ******
2026-02-17 06:19:08.743366 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:19:08.743377 | orchestrator |
2026-02-17 06:19:08.743388 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:19:08.743400 | orchestrator | Tuesday 17 February 2026 06:18:34 +0000 (0:00:01.509) 0:31:49.384 ******
2026-02-17 06:19:08.743413 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.743425 | orchestrator |
2026-02-17 06:19:08.743439 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:19:08.743451 | orchestrator | Tuesday 17 February 2026 06:18:35 +0000 (0:00:01.188) 0:31:50.572 ******
2026-02-17 06:19:08.743464 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.743476 | orchestrator |
2026-02-17 06:19:08.743489 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:19:08.743502 | orchestrator | Tuesday 17 February 2026 06:18:36 +0000 (0:00:00.815) 0:31:51.388 ******
2026-02-17 06:19:08.743514 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.743526 | orchestrator |
2026-02-17 06:19:08.743539 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:19:08.743551 | orchestrator | Tuesday 17 February 2026 06:18:36 +0000 (0:00:00.765) 0:31:52.153 ******
2026-02-17 06:19:08.743564 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-17 06:19:08.743576 | orchestrator |
2026-02-17 06:19:08.743588 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 06:19:08.743601 | orchestrator | Tuesday 17 February 2026 06:18:38 +0000 (0:00:01.125) 0:31:53.278 ******
2026-02-17 06:19:08.743613 | orchestrator | ok: [testbed-node-2]
2026-02-17 06:19:08.743625 | orchestrator |
2026-02-17 06:19:08.743638 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 06:19:08.743786 | orchestrator | Tuesday 17 February 2026 06:18:39 +0000 (0:00:01.815) 0:31:55.094 ******
2026-02-17 06:19:08.743819 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:19:08.743836 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:19:08.743854 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:19:08.743870 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.743886 | orchestrator |
2026-02-17 06:19:08.743903 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 06:19:08.743918 | orchestrator | Tuesday 17 February 2026 06:18:40 +0000 (0:00:01.173) 0:31:56.268 ******
2026-02-17 06:19:08.743934 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.743952 | orchestrator |
2026-02-17 06:19:08.743970 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 06:19:08.744009 | orchestrator | Tuesday 17 February 2026 06:18:42 +0000 (0:00:01.150) 0:31:57.418 ******
2026-02-17 06:19:08.744029 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.744041 | orchestrator |
2026-02-17 06:19:08.744051 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 06:19:08.744062 | orchestrator | Tuesday 17 February 2026 06:18:43 +0000 (0:00:01.209) 0:31:58.628 ******
2026-02-17 06:19:08.744073 | orchestrator | skipping: [testbed-node-2]
2026-02-17 06:19:08.744084 | orchestrator |
2026-02-17 06:19:08.744095 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 06:19:08.744106 | orchestrator | Tuesday 17 February 2026 06:18:44 +0000 (0:00:01.203) 0:31:59.831 ******
2026-02-17 06:19:08.744117 | orchestrator | skipping:
[testbed-node-2] 2026-02-17 06:19:08.744127 | orchestrator | 2026-02-17 06:19:08.744160 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-17 06:19:08.744171 | orchestrator | Tuesday 17 February 2026 06:18:45 +0000 (0:00:01.169) 0:32:01.000 ****** 2026-02-17 06:19:08.744183 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:08.744193 | orchestrator | 2026-02-17 06:19:08.744204 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:19:08.744215 | orchestrator | Tuesday 17 February 2026 06:18:46 +0000 (0:00:00.799) 0:32:01.800 ****** 2026-02-17 06:19:08.744226 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:19:08.744236 | orchestrator | 2026-02-17 06:19:08.744247 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:19:08.744258 | orchestrator | Tuesday 17 February 2026 06:18:48 +0000 (0:00:02.199) 0:32:03.999 ****** 2026-02-17 06:19:08.744269 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:19:08.744279 | orchestrator | 2026-02-17 06:19:08.744290 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-17 06:19:08.744301 | orchestrator | Tuesday 17 February 2026 06:18:49 +0000 (0:00:00.813) 0:32:04.812 ****** 2026-02-17 06:19:08.744312 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-17 06:19:08.744323 | orchestrator | 2026-02-17 06:19:08.744333 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-17 06:19:08.744344 | orchestrator | Tuesday 17 February 2026 06:18:50 +0000 (0:00:01.166) 0:32:05.979 ****** 2026-02-17 06:19:08.744355 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:08.744366 | orchestrator | 2026-02-17 06:19:08.744376 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] 
******************** 2026-02-17 06:19:08.744388 | orchestrator | Tuesday 17 February 2026 06:18:51 +0000 (0:00:01.178) 0:32:07.158 ****** 2026-02-17 06:19:08.744398 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:08.744409 | orchestrator | 2026-02-17 06:19:08.744420 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-17 06:19:08.744431 | orchestrator | Tuesday 17 February 2026 06:18:53 +0000 (0:00:01.192) 0:32:08.350 ****** 2026-02-17 06:19:08.744441 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:08.744452 | orchestrator | 2026-02-17 06:19:08.744463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-17 06:19:08.744483 | orchestrator | Tuesday 17 February 2026 06:18:54 +0000 (0:00:01.260) 0:32:09.611 ****** 2026-02-17 06:19:08.744494 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:08.744505 | orchestrator | 2026-02-17 06:19:08.744516 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-17 06:19:08.744527 | orchestrator | Tuesday 17 February 2026 06:18:55 +0000 (0:00:01.201) 0:32:10.813 ****** 2026-02-17 06:19:08.744537 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:08.744548 | orchestrator | 2026-02-17 06:19:08.744559 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-17 06:19:08.744570 | orchestrator | Tuesday 17 February 2026 06:18:56 +0000 (0:00:01.157) 0:32:11.970 ****** 2026-02-17 06:19:08.744580 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:08.744591 | orchestrator | 2026-02-17 06:19:08.744602 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-17 06:19:08.744613 | orchestrator | Tuesday 17 February 2026 06:18:57 +0000 (0:00:01.178) 0:32:13.149 ****** 2026-02-17 06:19:08.744623 | orchestrator | skipping: [testbed-node-2] 
2026-02-17 06:19:08.744634 | orchestrator | 2026-02-17 06:19:08.744671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-17 06:19:08.744683 | orchestrator | Tuesday 17 February 2026 06:18:59 +0000 (0:00:01.193) 0:32:14.342 ****** 2026-02-17 06:19:08.744694 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:08.744705 | orchestrator | 2026-02-17 06:19:08.744716 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-17 06:19:08.744727 | orchestrator | Tuesday 17 February 2026 06:19:00 +0000 (0:00:01.150) 0:32:15.493 ****** 2026-02-17 06:19:08.744738 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:19:08.744749 | orchestrator | 2026-02-17 06:19:08.744760 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-17 06:19:08.744771 | orchestrator | Tuesday 17 February 2026 06:19:01 +0000 (0:00:00.832) 0:32:16.325 ****** 2026-02-17 06:19:08.744782 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-17 06:19:08.744793 | orchestrator | 2026-02-17 06:19:08.744804 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-17 06:19:08.744815 | orchestrator | Tuesday 17 February 2026 06:19:02 +0000 (0:00:01.216) 0:32:17.542 ****** 2026-02-17 06:19:08.744826 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-17 06:19:08.744837 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-17 06:19:08.744848 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-17 06:19:08.744859 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-17 06:19:08.744870 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-17 06:19:08.744881 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-17 06:19:08.744891 | 
orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-17 06:19:08.744902 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-17 06:19:08.744919 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-17 06:19:08.744930 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-17 06:19:08.744941 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-17 06:19:08.744952 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-17 06:19:08.744963 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 06:19:08.744974 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 06:19:08.744984 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-17 06:19:08.744995 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-17 06:19:08.745006 | orchestrator | 2026-02-17 06:19:08.745024 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 06:19:49.405112 | orchestrator | Tuesday 17 February 2026 06:19:08 +0000 (0:00:06.445) 0:32:23.987 ****** 2026-02-17 06:19:49.405229 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405241 | orchestrator | 2026-02-17 06:19:49.405250 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-17 06:19:49.405257 | orchestrator | Tuesday 17 February 2026 06:19:09 +0000 (0:00:00.775) 0:32:24.763 ****** 2026-02-17 06:19:49.405265 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405272 | orchestrator | 2026-02-17 06:19:49.405280 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-17 06:19:49.405287 | orchestrator | Tuesday 17 February 2026 06:19:10 +0000 (0:00:00.771) 0:32:25.535 ****** 2026-02-17 06:19:49.405295 | 
orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405302 | orchestrator | 2026-02-17 06:19:49.405310 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-17 06:19:49.405317 | orchestrator | Tuesday 17 February 2026 06:19:11 +0000 (0:00:00.789) 0:32:26.324 ****** 2026-02-17 06:19:49.405324 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405332 | orchestrator | 2026-02-17 06:19:49.405339 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-17 06:19:49.405346 | orchestrator | Tuesday 17 February 2026 06:19:11 +0000 (0:00:00.843) 0:32:27.167 ****** 2026-02-17 06:19:49.405354 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405361 | orchestrator | 2026-02-17 06:19:49.405368 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-17 06:19:49.405376 | orchestrator | Tuesday 17 February 2026 06:19:12 +0000 (0:00:00.786) 0:32:27.954 ****** 2026-02-17 06:19:49.405383 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405390 | orchestrator | 2026-02-17 06:19:49.405398 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-17 06:19:49.405406 | orchestrator | Tuesday 17 February 2026 06:19:13 +0000 (0:00:00.774) 0:32:28.728 ****** 2026-02-17 06:19:49.405413 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405420 | orchestrator | 2026-02-17 06:19:49.405428 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-17 06:19:49.405435 | orchestrator | Tuesday 17 February 2026 06:19:14 +0000 (0:00:00.783) 0:32:29.512 ****** 2026-02-17 06:19:49.405442 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405450 | orchestrator | 2026-02-17 06:19:49.405457 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-17 06:19:49.405464 | orchestrator | Tuesday 17 February 2026 06:19:15 +0000 (0:00:00.843) 0:32:30.356 ****** 2026-02-17 06:19:49.405472 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405479 | orchestrator | 2026-02-17 06:19:49.405486 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-17 06:19:49.405494 | orchestrator | Tuesday 17 February 2026 06:19:15 +0000 (0:00:00.845) 0:32:31.202 ****** 2026-02-17 06:19:49.405502 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405509 | orchestrator | 2026-02-17 06:19:49.405516 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-17 06:19:49.405524 | orchestrator | Tuesday 17 February 2026 06:19:16 +0000 (0:00:00.776) 0:32:31.978 ****** 2026-02-17 06:19:49.405531 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405538 | orchestrator | 2026-02-17 06:19:49.405545 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-17 06:19:49.405553 | orchestrator | Tuesday 17 February 2026 06:19:17 +0000 (0:00:00.806) 0:32:32.785 ****** 2026-02-17 06:19:49.405561 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405568 | orchestrator | 2026-02-17 06:19:49.405575 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-17 06:19:49.405582 | orchestrator | Tuesday 17 February 2026 06:19:18 +0000 (0:00:00.808) 0:32:33.594 ****** 2026-02-17 06:19:49.405590 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405597 | orchestrator | 2026-02-17 06:19:49.405604 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-17 06:19:49.405617 | orchestrator | Tuesday 17 February 2026 06:19:19 +0000 (0:00:00.881) 0:32:34.476 ****** 
2026-02-17 06:19:49.405624 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405632 | orchestrator | 2026-02-17 06:19:49.405639 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-17 06:19:49.405646 | orchestrator | Tuesday 17 February 2026 06:19:19 +0000 (0:00:00.788) 0:32:35.264 ****** 2026-02-17 06:19:49.405653 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405678 | orchestrator | 2026-02-17 06:19:49.405687 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-17 06:19:49.405696 | orchestrator | Tuesday 17 February 2026 06:19:20 +0000 (0:00:00.872) 0:32:36.137 ****** 2026-02-17 06:19:49.405704 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405713 | orchestrator | 2026-02-17 06:19:49.405721 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-17 06:19:49.405730 | orchestrator | Tuesday 17 February 2026 06:19:21 +0000 (0:00:00.819) 0:32:36.957 ****** 2026-02-17 06:19:49.405738 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405746 | orchestrator | 2026-02-17 06:19:49.405767 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:19:49.405777 | orchestrator | Tuesday 17 February 2026 06:19:22 +0000 (0:00:00.856) 0:32:37.813 ****** 2026-02-17 06:19:49.405786 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405794 | orchestrator | 2026-02-17 06:19:49.405802 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:19:49.405810 | orchestrator | Tuesday 17 February 2026 06:19:23 +0000 (0:00:00.789) 0:32:38.603 ****** 2026-02-17 06:19:49.405818 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405827 | orchestrator | 2026-02-17 06:19:49.405835 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:19:49.405844 | orchestrator | Tuesday 17 February 2026 06:19:24 +0000 (0:00:00.792) 0:32:39.395 ****** 2026-02-17 06:19:49.405852 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405860 | orchestrator | 2026-02-17 06:19:49.405882 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:19:49.405890 | orchestrator | Tuesday 17 February 2026 06:19:24 +0000 (0:00:00.799) 0:32:40.195 ****** 2026-02-17 06:19:49.405897 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405905 | orchestrator | 2026-02-17 06:19:49.405912 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:19:49.405920 | orchestrator | Tuesday 17 February 2026 06:19:25 +0000 (0:00:00.814) 0:32:41.009 ****** 2026-02-17 06:19:49.405927 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:19:49.405935 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:19:49.405942 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:19:49.405950 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.405957 | orchestrator | 2026-02-17 06:19:49.405965 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:19:49.405972 | orchestrator | Tuesday 17 February 2026 06:19:26 +0000 (0:00:01.104) 0:32:42.113 ****** 2026-02-17 06:19:49.405979 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:19:49.405987 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:19:49.405994 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:19:49.406001 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.406008 | orchestrator | 2026-02-17 06:19:49.406062 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:19:49.406072 | orchestrator | Tuesday 17 February 2026 06:19:27 +0000 (0:00:01.077) 0:32:43.191 ****** 2026-02-17 06:19:49.406079 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-17 06:19:49.406086 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-17 06:19:49.406099 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-17 06:19:49.406107 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.406114 | orchestrator | 2026-02-17 06:19:49.406121 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:19:49.406129 | orchestrator | Tuesday 17 February 2026 06:19:29 +0000 (0:00:01.095) 0:32:44.287 ****** 2026-02-17 06:19:49.406136 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.406143 | orchestrator | 2026-02-17 06:19:49.406151 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:19:49.406158 | orchestrator | Tuesday 17 February 2026 06:19:29 +0000 (0:00:00.796) 0:32:45.084 ****** 2026-02-17 06:19:49.406166 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-17 06:19:49.406174 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.406181 | orchestrator | 2026-02-17 06:19:49.406189 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-17 06:19:49.406196 | orchestrator | Tuesday 17 February 2026 06:19:30 +0000 (0:00:00.929) 0:32:46.013 ****** 2026-02-17 06:19:49.406203 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:19:49.406211 | orchestrator | 2026-02-17 06:19:49.406218 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:19:49.406226 | orchestrator | Tuesday 17 February 2026 06:19:32 +0000 (0:00:01.464) 
0:32:47.479 ****** 2026-02-17 06:19:49.406233 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:19:49.406241 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:19:49.406249 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-17 06:19:49.406256 | orchestrator | 2026-02-17 06:19:49.406263 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-17 06:19:49.406271 | orchestrator | Tuesday 17 February 2026 06:19:33 +0000 (0:00:01.703) 0:32:49.182 ****** 2026-02-17 06:19:49.406278 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-02-17 06:19:49.406286 | orchestrator | 2026-02-17 06:19:49.406293 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-17 06:19:49.406300 | orchestrator | Tuesday 17 February 2026 06:19:35 +0000 (0:00:01.115) 0:32:50.297 ****** 2026-02-17 06:19:49.406307 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:19:49.406315 | orchestrator | 2026-02-17 06:19:49.406322 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-17 06:19:49.406330 | orchestrator | Tuesday 17 February 2026 06:19:36 +0000 (0:00:01.454) 0:32:51.752 ****** 2026-02-17 06:19:49.406337 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:19:49.406344 | orchestrator | 2026-02-17 06:19:49.406352 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-17 06:19:49.406359 | orchestrator | Tuesday 17 February 2026 06:19:37 +0000 (0:00:01.131) 0:32:52.884 ****** 2026-02-17 06:19:49.406366 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:19:49.406374 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 
06:19:49.406381 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:19:49.406389 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}] 2026-02-17 06:19:49.406396 | orchestrator | 2026-02-17 06:19:49.406408 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-17 06:19:49.406415 | orchestrator | Tuesday 17 February 2026 06:19:45 +0000 (0:00:07.439) 0:33:00.323 ****** 2026-02-17 06:19:49.406423 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:19:49.406430 | orchestrator | 2026-02-17 06:19:49.406438 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-17 06:19:49.406445 | orchestrator | Tuesday 17 February 2026 06:19:46 +0000 (0:00:01.175) 0:33:01.498 ****** 2026-02-17 06:19:49.406452 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-17 06:19:49.406460 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-17 06:19:49.406472 | orchestrator | 2026-02-17 06:19:49.406479 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-17 06:19:49.406493 | orchestrator | Tuesday 17 February 2026 06:19:49 +0000 (0:00:03.162) 0:33:04.661 ****** 2026-02-17 06:20:31.937893 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-17 06:20:31.938124 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-17 06:20:31.938162 | orchestrator | 2026-02-17 06:20:31.938182 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-17 06:20:31.938204 | orchestrator | Tuesday 17 February 2026 06:19:51 +0000 (0:00:02.010) 0:33:06.672 ****** 2026-02-17 06:20:31.938222 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:20:31.938240 | orchestrator | 2026-02-17 06:20:31.938258 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-17 
06:20:31.938275 | orchestrator | Tuesday 17 February 2026 06:19:52 +0000 (0:00:01.489) 0:33:08.162 ****** 2026-02-17 06:20:31.938293 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:20:31.938312 | orchestrator | 2026-02-17 06:20:31.938332 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-17 06:20:31.938386 | orchestrator | Tuesday 17 February 2026 06:19:53 +0000 (0:00:00.818) 0:33:08.980 ****** 2026-02-17 06:20:31.938408 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:20:31.938426 | orchestrator | 2026-02-17 06:20:31.938443 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-17 06:20:31.938462 | orchestrator | Tuesday 17 February 2026 06:19:54 +0000 (0:00:00.756) 0:33:09.737 ****** 2026-02-17 06:20:31.938481 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2 2026-02-17 06:20:31.938499 | orchestrator | 2026-02-17 06:20:31.938518 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-17 06:20:31.938535 | orchestrator | Tuesday 17 February 2026 06:19:55 +0000 (0:00:01.251) 0:33:10.988 ****** 2026-02-17 06:20:31.938553 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:20:31.938570 | orchestrator | 2026-02-17 06:20:31.938587 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-17 06:20:31.938605 | orchestrator | Tuesday 17 February 2026 06:19:56 +0000 (0:00:01.219) 0:33:12.207 ****** 2026-02-17 06:20:31.938622 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:20:31.938640 | orchestrator | 2026-02-17 06:20:31.938659 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-17 06:20:31.938709 | orchestrator | Tuesday 17 February 2026 06:19:58 +0000 (0:00:01.187) 0:33:13.395 ****** 2026-02-17 06:20:31.938730 | orchestrator | included: 
/ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-02-17 06:20:31.938748 | orchestrator | 2026-02-17 06:20:31.938766 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-17 06:20:31.938784 | orchestrator | Tuesday 17 February 2026 06:19:59 +0000 (0:00:01.142) 0:33:14.538 ****** 2026-02-17 06:20:31.938803 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:20:31.938821 | orchestrator | 2026-02-17 06:20:31.938839 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-17 06:20:31.938858 | orchestrator | Tuesday 17 February 2026 06:20:01 +0000 (0:00:02.085) 0:33:16.623 ****** 2026-02-17 06:20:31.938877 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:20:31.938896 | orchestrator | 2026-02-17 06:20:31.938914 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-17 06:20:31.938931 | orchestrator | Tuesday 17 February 2026 06:20:03 +0000 (0:00:02.282) 0:33:18.906 ****** 2026-02-17 06:20:31.938949 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:20:31.938967 | orchestrator | 2026-02-17 06:20:31.938983 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-17 06:20:31.939001 | orchestrator | Tuesday 17 February 2026 06:20:06 +0000 (0:00:02.439) 0:33:21.345 ****** 2026-02-17 06:20:31.939018 | orchestrator | changed: [testbed-node-2] 2026-02-17 06:20:31.939035 | orchestrator | 2026-02-17 06:20:31.939052 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-17 06:20:31.939105 | orchestrator | Tuesday 17 February 2026 06:20:09 +0000 (0:00:03.758) 0:33:25.104 ****** 2026-02-17 06:20:31.939124 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-17 06:20:31.939143 | orchestrator | 2026-02-17 06:20:31.939160 | orchestrator | TASK [ceph-mgr : Wait for all mgr to 
be up] ************************************ 2026-02-17 06:20:31.939178 | orchestrator | Tuesday 17 February 2026 06:20:11 +0000 (0:00:01.528) 0:33:26.633 ****** 2026-02-17 06:20:31.939197 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:20:31.939214 | orchestrator | 2026-02-17 06:20:31.939231 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-17 06:20:31.939248 | orchestrator | Tuesday 17 February 2026 06:20:13 +0000 (0:00:02.407) 0:33:29.040 ****** 2026-02-17 06:20:31.939265 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:20:31.939283 | orchestrator | 2026-02-17 06:20:31.939301 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-17 06:20:31.939317 | orchestrator | Tuesday 17 February 2026 06:20:16 +0000 (0:00:02.739) 0:33:31.780 ****** 2026-02-17 06:20:31.939332 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:20:31.939346 | orchestrator | 2026-02-17 06:20:31.939362 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-17 06:20:31.939377 | orchestrator | Tuesday 17 February 2026 06:20:17 +0000 (0:00:01.363) 0:33:33.143 ****** 2026-02-17 06:20:31.939413 | orchestrator | ok: [testbed-node-2] 2026-02-17 06:20:31.939432 | orchestrator | 2026-02-17 06:20:31.939448 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-17 06:20:31.939462 | orchestrator | Tuesday 17 February 2026 06:20:19 +0000 (0:00:01.169) 0:33:34.313 ****** 2026-02-17 06:20:31.939478 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-17 06:20:31.939493 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-17 06:20:31.939509 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:20:31.939524 | orchestrator | 2026-02-17 06:20:31.939540 | orchestrator | TASK 
[ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-17 06:20:31.939555 | orchestrator | Tuesday 17 February 2026 06:20:20 +0000 (0:00:01.361) 0:33:35.674 ****** 2026-02-17 06:20:31.939570 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-17 06:20:31.939586 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-17 06:20:31.939631 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-17 06:20:31.939649 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-17 06:20:31.939665 | orchestrator | skipping: [testbed-node-2] 2026-02-17 06:20:31.939713 | orchestrator | 2026-02-17 06:20:31.939729 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-02-17 06:20:31.939745 | orchestrator | 2026-02-17 06:20:31.939761 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:20:31.939776 | orchestrator | Tuesday 17 February 2026 06:20:22 +0000 (0:00:01.917) 0:33:37.591 ****** 2026-02-17 06:20:31.939792 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:20:31.939808 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:20:31.939824 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:20:31.939840 | orchestrator | 2026-02-17 06:20:31.939855 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:20:31.939872 | orchestrator | Tuesday 17 February 2026 06:20:24 +0000 (0:00:01.878) 0:33:39.470 ****** 2026-02-17 06:20:31.939890 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:20:31.939906 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:20:31.939922 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:20:31.939938 | orchestrator | 2026-02-17 06:20:31.939954 | orchestrator | TASK [Get pool list] *********************************************************** 2026-02-17 06:20:31.939970 | orchestrator | Tuesday 17 February 2026 
06:20:25 +0000 (0:00:01.570) 0:33:41.041 ****** 2026-02-17 06:20:31.939986 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:20:31.940022 | orchestrator | 2026-02-17 06:20:31.940038 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-17 06:20:31.940053 | orchestrator | Tuesday 17 February 2026 06:20:28 +0000 (0:00:02.850) 0:33:43.892 ****** 2026-02-17 06:20:31.940068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:20:31.940083 | orchestrator | 2026-02-17 06:20:31.940098 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-17 06:20:31.940113 | orchestrator | Tuesday 17 February 2026 06:20:31 +0000 (0:00:02.748) 0:33:46.641 ****** 2026-02-17 06:20:31.940137 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-17T03:45:59.661681+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:31.940191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-17T03:47:13.528315+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:32.705759 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-17T03:47:17.388093+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 2.25, 'score_stable': 2.25, 'optimal_score': 1, 'raw_score_acting': 2.25, 'raw_score_stable': 2.25, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:32.705898 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-17T03:48:16.650059+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '74', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '66', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 
'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:32.705940 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-17T03:48:22.861719+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '74', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '68', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:32.705960 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-17T03:48:29.082004+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '74', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '68', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 2.25, 'score_stable': 2.25, 'optimal_score': 1, 'raw_score_acting': 2.25, 'raw_score_stable': 2.25, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:32.705991 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-17T03:48:35.171924+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 
'last_change': '187', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '70', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:34.314669 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-02-17T03:48:40.426411+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 
'target_version': "0'0"}, 'last_change': '74', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '70', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:34.314836 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-17T03:48:52.651087+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': 
"0'0", 'target_version': "0'0"}, 'last_change': '74', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '72', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:34.314879 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-17T03:49:40.414056+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '107', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 107, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:34.314899 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-17T03:49:48.851716+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 
'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '115', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 115, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:20:34.314925 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-17T03:49:57.739154+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 
'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '199', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 199, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:22:09.038208 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-17T03:50:06.313887+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '132', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 132, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:22:09.038345 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-02-17T03:50:14.686758+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '140', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 140, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-17 06:22:09.038386 | orchestrator | 2026-02-17 06:22:09.038400 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-02-17 06:22:09.038430 | orchestrator | Tuesday 17 February 2026 06:20:34 +0000 (0:00:02.934) 0:33:49.576 ****** 2026-02-17 06:22:09.038443 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)]
2026-02-17 06:22:09.038455 | orchestrator |
2026-02-17 06:22:09.038466 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-02-17 06:22:09.038477 | orchestrator | Tuesday 17 February 2026 06:20:37 +0000 (0:00:02.928) 0:33:52.504 ******
2026-02-17 06:22:09.038488 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-02-17 06:22:09.038501 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-02-17 06:22:09.038513 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-02-17 06:22:09.038524 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-02-17 06:22:09.038536 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-02-17 06:22:09.038548 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-02-17 06:22:09.038559 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-02-17 06:22:09.038570 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-02-17 06:22:09.038581 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-02-17 06:22:09.038592 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-02-17 06:22:09.038603 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-02-17 06:22:09.038614 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-02-17 06:22:09.038625 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-02-17 06:22:09.038636 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-02-17 06:22:09.038647 | orchestrator |
2026-02-17 06:22:09.038658 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-02-17 06:22:09.038671 | orchestrator | Tuesday 17 February 2026 06:21:51 +0000 (0:01:14.645) 0:35:07.150 ******
2026-02-17 06:22:09.038684 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-02-17 06:22:09.038697 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-02-17 06:22:09.038712 | orchestrator |
2026-02-17 06:22:09.038751 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-02-17 06:22:09.038764 | orchestrator |
2026-02-17 06:22:09.038777 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-17 06:22:09.038790 | orchestrator | Tuesday 17 February 2026 06:21:57 +0000 (0:00:05.951) 0:35:13.102 ******
2026-02-17 06:22:09.038809 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-17 06:22:09.038823 | orchestrator |
2026-02-17 06:22:09.038844 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-17 06:22:09.038857 | orchestrator | Tuesday 17 February 2026 06:21:59 +0000 (0:00:01.170) 0:35:14.273 ******
2026-02-17 06:22:09.038871 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:22:09.038885 | orchestrator |
2026-02-17 06:22:09.038899 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-17 06:22:09.038912 | orchestrator | Tuesday 17 February 2026 06:22:00 +0000 (0:00:01.490) 0:35:15.763 ******
2026-02-17 06:22:09.038925 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:22:09.038938 | orchestrator |
2026-02-17 06:22:09.038951 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-17 06:22:09.038964 | orchestrator | Tuesday 17 February 2026 06:22:01 +0000 (0:00:01.182) 0:35:16.946 ******
2026-02-17 06:22:09.038977 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:22:09.038991 | orchestrator |
2026-02-17 06:22:09.039003 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-17 06:22:09.039017 | orchestrator | Tuesday 17 February 2026 06:22:03 +0000 (0:00:01.441) 0:35:18.387 ******
2026-02-17 06:22:09.039031 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:22:09.039042 | orchestrator |
2026-02-17 06:22:09.039053 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-17 06:22:09.039065 | orchestrator | Tuesday 17 February 2026 06:22:04 +0000 (0:00:01.096) 0:35:19.484 ******
2026-02-17 06:22:09.039076 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:22:09.039087 | orchestrator |
2026-02-17 06:22:09.039097 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-17 06:22:09.039109 | orchestrator | Tuesday 17 February 2026 06:22:05 +0000 (0:00:01.285) 0:35:20.769 ******
2026-02-17 06:22:09.039120 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:22:09.039131 | orchestrator |
2026-02-17 06:22:09.039143 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-17 06:22:09.039154 | orchestrator | Tuesday 17 February 2026 06:22:06 +0000 (0:00:01.170) 0:35:21.939 ******
2026-02-17 06:22:09.039165 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:22:09.039177 | orchestrator |
2026-02-17 06:22:09.039188 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-17 06:22:09.039199 | orchestrator | Tuesday 17 February 2026 06:22:07 +0000 (0:00:01.142) 0:35:23.082 ******
2026-02-17 06:22:09.039210 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:22:09.039221 | orchestrator |
2026-02-17 06:22:09.039239 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-17 06:22:34.987192 | orchestrator | Tuesday 17 February 2026 06:22:09 +0000 (0:00:01.209) 0:35:24.292 ******
2026-02-17 06:22:34.987303 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:22:34.987319 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:22:34.987330 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:22:34.987342 | orchestrator |
2026-02-17 06:22:34.987354 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-17 06:22:34.987366 | orchestrator | Tuesday 17 February 2026 06:22:11 +0000 (0:00:02.077) 0:35:26.369 ******
2026-02-17 06:22:34.987377 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:22:34.987389 | orchestrator |
2026-02-17 06:22:34.987400 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-17 06:22:34.987411 | orchestrator | Tuesday 17 February 2026 06:22:12 +0000 (0:00:01.272) 0:35:27.642 ******
2026-02-17 06:22:34.987422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:22:34.987433 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:22:34.987444 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:22:34.987455 | orchestrator |
2026-02-17 06:22:34.987465 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-17 06:22:34.987499 | orchestrator | Tuesday 17 February 2026 06:22:15 +0000 (0:00:03.402) 0:35:31.045 ******
2026-02-17 06:22:34.987511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 06:22:34.987522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 06:22:34.987533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 06:22:34.987544 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:22:34.987554 | orchestrator |
2026-02-17 06:22:34.987565 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-17 06:22:34.987576 | orchestrator | Tuesday 17 February 2026 06:22:17 +0000 (0:00:01.855) 0:35:32.900 ******
2026-02-17 06:22:34.987589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-17 06:22:34.987602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-17 06:22:34.987614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-17 06:22:34.987625 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:22:34.987636 | orchestrator |
2026-02-17 06:22:34.987647 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-17 06:22:34.987671 | orchestrator | Tuesday 17 February 2026 06:22:19 +0000
(0:00:02.085) 0:35:34.986 ****** 2026-02-17 06:22:34.987685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:34.987700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:34.987711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:34.987722 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:34.987761 | orchestrator | 2026-02-17 06:22:34.987774 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:22:34.987787 | orchestrator | Tuesday 17 February 2026 06:22:20 +0000 (0:00:01.233) 0:35:36.219 ****** 2026-02-17 06:22:34.987820 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 
06:22:12.929077', 'end': '2026-02-17 06:22:12.974918', 'delta': '0:00:00.045841', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:22:34.987845 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:22:13.878275', 'end': '2026-02-17 06:22:13.925473', 'delta': '0:00:00.047198', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:22:34.987858 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:22:14.438179', 'end': '2026-02-17 06:22:14.484600', 'delta': '0:00:00.046421', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:22:34.987871 | orchestrator | 2026-02-17 06:22:34.987884 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:22:34.987898 | orchestrator | Tuesday 17 February 2026 06:22:22 +0000 (0:00:01.239) 0:35:37.459 ****** 2026-02-17 06:22:34.987910 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:22:34.987923 | orchestrator | 2026-02-17 06:22:34.987934 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:22:34.987951 | orchestrator | Tuesday 17 February 2026 06:22:23 +0000 (0:00:01.281) 0:35:38.740 ****** 2026-02-17 06:22:34.987962 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:34.987973 | orchestrator | 2026-02-17 06:22:34.987984 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:22:34.987994 | orchestrator | Tuesday 17 February 2026 06:22:24 +0000 (0:00:01.289) 0:35:40.030 ****** 2026-02-17 06:22:34.988005 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:22:34.988016 | orchestrator | 2026-02-17 06:22:34.988027 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:22:34.988038 | orchestrator | Tuesday 17 February 2026 06:22:25 +0000 (0:00:01.173) 0:35:41.203 ****** 2026-02-17 06:22:34.988049 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:22:34.988060 | orchestrator | 2026-02-17 06:22:34.988071 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:22:34.988081 | orchestrator | Tuesday 17 February 2026 06:22:27 +0000 (0:00:01.997) 0:35:43.201 ****** 2026-02-17 06:22:34.988092 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:22:34.988103 | orchestrator | 2026-02-17 
06:22:34.988114 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:22:34.988125 | orchestrator | Tuesday 17 February 2026 06:22:29 +0000 (0:00:01.203) 0:35:44.405 ****** 2026-02-17 06:22:34.988135 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:34.988146 | orchestrator | 2026-02-17 06:22:34.988157 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:22:34.988169 | orchestrator | Tuesday 17 February 2026 06:22:30 +0000 (0:00:01.104) 0:35:45.509 ****** 2026-02-17 06:22:34.988180 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:34.988191 | orchestrator | 2026-02-17 06:22:34.988202 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:22:34.988219 | orchestrator | Tuesday 17 February 2026 06:22:31 +0000 (0:00:01.236) 0:35:46.746 ****** 2026-02-17 06:22:34.988230 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:34.988241 | orchestrator | 2026-02-17 06:22:34.988252 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:22:34.988263 | orchestrator | Tuesday 17 February 2026 06:22:32 +0000 (0:00:01.197) 0:35:47.944 ****** 2026-02-17 06:22:34.988274 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:34.988285 | orchestrator | 2026-02-17 06:22:34.988296 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:22:34.988307 | orchestrator | Tuesday 17 February 2026 06:22:33 +0000 (0:00:01.121) 0:35:49.065 ****** 2026-02-17 06:22:34.988324 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:22:39.930707 | orchestrator | 2026-02-17 06:22:39.930873 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:22:39.930890 | orchestrator | Tuesday 17 February 2026 06:22:34 +0000 (0:00:01.180) 
0:35:50.246 ****** 2026-02-17 06:22:39.930901 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:39.930912 | orchestrator | 2026-02-17 06:22:39.930922 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:22:39.930932 | orchestrator | Tuesday 17 February 2026 06:22:36 +0000 (0:00:01.185) 0:35:51.431 ****** 2026-02-17 06:22:39.930942 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:22:39.930952 | orchestrator | 2026-02-17 06:22:39.930961 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:22:39.930971 | orchestrator | Tuesday 17 February 2026 06:22:37 +0000 (0:00:01.174) 0:35:52.605 ****** 2026-02-17 06:22:39.930981 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:39.930990 | orchestrator | 2026-02-17 06:22:39.931000 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:22:39.931011 | orchestrator | Tuesday 17 February 2026 06:22:38 +0000 (0:00:01.138) 0:35:53.743 ****** 2026-02-17 06:22:39.931021 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:22:39.931030 | orchestrator | 2026-02-17 06:22:39.931040 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:22:39.931049 | orchestrator | Tuesday 17 February 2026 06:22:39 +0000 (0:00:01.221) 0:35:54.965 ****** 2026-02-17 06:22:39.931061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:22:39.931075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 
'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'uuids': ['b2ca6990-5b39-46e1-9ab9-fa89aec205ee'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP']}})  2026-02-17 06:22:39.931104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce83e4f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:22:39.931139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f']}})  2026-02-17 06:22:39.931151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:22:39.931178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:22:39.931189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-18-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:22:39.931200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:22:39.931210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac', 'dm-uuid-CRYPT-LUKS2-edb3e2e5a632414f8a4f0db6f2dd266c-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:22:39.931220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:22:39.931236 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'uuids': ['edb3e2e5-a632-414f-8a4f-0db6f2dd266c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac']}})  2026-02-17 06:22:39.931254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3']}})  2026-02-17 06:22:39.931274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:22:41.310164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d567a40', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:22:41.310346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:22:41.310377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:22:41.310398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP', 'dm-uuid-CRYPT-LUKS2-b2ca69905b3946e19ab9fa89aec205ee-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:22:41.310420 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:22:41.310440 | orchestrator | 2026-02-17 06:22:41.310460 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:22:41.310479 | orchestrator | Tuesday 17 February 2026 06:22:41 +0000 (0:00:01.350) 0:35:56.315 ****** 2026-02-17 06:22:41.310528 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:41.310550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'uuids': ['b2ca6990-5b39-46e1-9ab9-fa89aec205ee'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:41.310572 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce83e4f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:41.310617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:41.310641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:41.310672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:22:42.496332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac', 'dm-uuid-CRYPT-LUKS2-edb3e2e5a632414f8a4f0db6f2dd266c-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496501 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'uuids': ['edb3e2e5-a632-414f-8a4f-0db6f2dd266c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac']}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496548 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3']}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496564 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d567a40', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:22:42.496626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:23:20.763962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP', 'dm-uuid-CRYPT-LUKS2-b2ca69905b3946e19ab9fa89aec205ee-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:23:20.764105 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764123 | orchestrator |
2026-02-17 06:23:20.764135 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-17 06:23:20.764148 | orchestrator | Tuesday 17 February 2026 06:22:42 +0000 (0:00:01.436) 0:35:57.752 ******
2026-02-17 06:23:20.764159 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:23:20.764188 | orchestrator |
2026-02-17 06:23:20.764200 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-17 06:23:20.764211 | orchestrator | Tuesday 17 February 2026 06:22:44 +0000 (0:00:01.527) 0:35:59.280 ******
2026-02-17 06:23:20.764234 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:23:20.764245 | orchestrator |
2026-02-17 06:23:20.764256 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:23:20.764266 | orchestrator | Tuesday 17 February 2026 06:22:45 +0000 (0:00:01.182) 0:36:00.462 ******
2026-02-17 06:23:20.764277 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:23:20.764288 | orchestrator |
2026-02-17 06:23:20.764299 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:23:20.764324 | orchestrator | Tuesday 17 February 2026 06:22:46 +0000 (0:00:01.501) 0:36:01.964 ******
2026-02-17 06:23:20.764336 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764347 | orchestrator |
2026-02-17 06:23:20.764358 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:23:20.764369 | orchestrator | Tuesday 17 February 2026 06:22:47 +0000 (0:00:01.158) 0:36:03.124 ******
2026-02-17 06:23:20.764381 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764392 | orchestrator |
2026-02-17 06:23:20.764403 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:23:20.764414 | orchestrator | Tuesday 17 February 2026 06:22:49 +0000 (0:00:01.273) 0:36:04.398 ******
2026-02-17 06:23:20.764425 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764436 | orchestrator |
2026-02-17 06:23:20.764447 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:23:20.764458 | orchestrator | Tuesday 17 February 2026 06:22:50 +0000 (0:00:01.127) 0:36:05.525 ******
2026-02-17 06:23:20.764468 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 06:23:20.764481 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 06:23:20.764494 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 06:23:20.764506 | orchestrator |
2026-02-17 06:23:20.764519 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:23:20.764531 | orchestrator | Tuesday 17 February 2026 06:22:52 +0000 (0:00:02.080) 0:36:07.605 ******
2026-02-17 06:23:20.764543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 06:23:20.764556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 06:23:20.764569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 06:23:20.764581 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764593 | orchestrator |
2026-02-17 06:23:20.764605 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 06:23:20.764618 | orchestrator | Tuesday 17 February 2026 06:22:53 +0000 (0:00:01.162) 0:36:08.768 ******
2026-02-17 06:23:20.764631 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-02-17 06:23:20.764644 | orchestrator |
2026-02-17 06:23:20.764657 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:23:20.764671 | orchestrator | Tuesday 17 February 2026 06:22:54 +0000 (0:00:01.280) 0:36:10.049 ******
2026-02-17 06:23:20.764684 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764696 | orchestrator |
2026-02-17 06:23:20.764709 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:23:20.764722 | orchestrator | Tuesday 17 February 2026 06:22:56 +0000 (0:00:01.236) 0:36:11.285 ******
2026-02-17 06:23:20.764744 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764788 | orchestrator |
2026-02-17 06:23:20.764801 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:23:20.764814 | orchestrator | Tuesday 17 February 2026 06:22:57 +0000 (0:00:01.159) 0:36:12.445 ******
2026-02-17 06:23:20.764826 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764839 | orchestrator |
2026-02-17 06:23:20.764850 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:23:20.764861 | orchestrator | Tuesday 17 February 2026 06:22:58 +0000 (0:00:01.156) 0:36:13.602 ******
2026-02-17 06:23:20.764872 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:23:20.764883 | orchestrator |
2026-02-17 06:23:20.764894 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:23:20.764905 | orchestrator | Tuesday 17 February 2026 06:22:59 +0000 (0:00:01.239) 0:36:14.841 ******
2026-02-17 06:23:20.764917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:23:20.764944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:23:20.764956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:23:20.764967 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.764978 | orchestrator |
2026-02-17 06:23:20.764989 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:23:20.765000 | orchestrator | Tuesday 17 February 2026 06:23:01 +0000 (0:00:01.446) 0:36:16.288 ******
2026-02-17 06:23:20.765011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:23:20.765022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:23:20.765033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:23:20.765044 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.765056 | orchestrator |
2026-02-17 06:23:20.765066 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:23:20.765077 | orchestrator | Tuesday 17 February 2026 06:23:02 +0000 (0:00:01.461) 0:36:17.750 ******
2026-02-17 06:23:20.765089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:23:20.765099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:23:20.765110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:23:20.765121 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:23:20.765132 | orchestrator |
2026-02-17 06:23:20.765143 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:23:20.765154 | orchestrator | Tuesday 17 February 2026 06:23:04 +0000 (0:00:01.522) 0:36:19.273 ******
2026-02-17 06:23:20.765165 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:23:20.765176 | orchestrator |
2026-02-17 06:23:20.765187 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:23:20.765198 | orchestrator | Tuesday 17 February 2026 06:23:05 +0000 (0:00:01.215) 0:36:20.488 ******
2026-02-17 06:23:20.765209 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-17 06:23:20.765219 | orchestrator |
2026-02-17 06:23:20.765231 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 06:23:20.765242 | orchestrator | Tuesday 17 February 2026 06:23:06 +0000 (0:00:01.418) 0:36:21.907 ******
2026-02-17 06:23:20.765259 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:23:20.765270 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:23:20.765281 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:23:20.765292 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:23:20.765303 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:23:20.765314 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:23:20.765333 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:23:20.765344 | orchestrator |
2026-02-17 06:23:20.765355 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 06:23:20.765366 | orchestrator | Tuesday 17 February 2026 06:23:08 +0000 (0:00:02.216) 0:36:24.124 ******
2026-02-17 06:23:20.765377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:23:20.765388 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:23:20.765399 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:23:20.765409 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:23:20.765420 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:23:20.765431 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:23:20.765442 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:23:20.765453 | orchestrator |
2026-02-17 06:23:20.765464 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-17 06:23:20.765475 | orchestrator | Tuesday 17 February 2026 06:23:11 +0000 (0:00:02.733) 0:36:26.858 ******
2026-02-17 06:23:20.765486 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:23:20.765497 | orchestrator |
2026-02-17 06:23:20.765508 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-17 06:23:20.765519 | orchestrator | Tuesday 17 February 2026 06:23:13 +0000 (0:00:01.475) 0:36:28.333 ******
2026-02-17 06:23:20.765530 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:23:20.765541 | orchestrator |
2026-02-17 06:23:20.765574 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-17 06:23:20.765586 | orchestrator | Tuesday 17 February 2026 06:23:14 +0000 (0:00:01.143) 0:36:29.477 ******
2026-02-17 06:23:20.765597 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:23:20.765608 | orchestrator |
2026-02-17 06:23:20.765619 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-17 06:23:20.765630 | orchestrator | Tuesday 17 February 2026 06:23:15 +0000 (0:00:01.308) 0:36:30.786 ******
2026-02-17 06:23:20.765641 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-17 06:23:20.765652 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-17 06:23:20.765663 | orchestrator |
2026-02-17 06:23:20.765674 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:23:20.765686 | orchestrator | Tuesday 17 February 2026 06:23:19 +0000 (0:00:04.070) 0:36:34.856 ******
2026-02-17 06:23:20.765697 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-17 06:23:20.765708 | orchestrator |
2026-02-17 06:23:20.765719 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 06:23:20.765737 | orchestrator | Tuesday 17 February 2026 06:23:20 +0000 (0:00:01.157) 0:36:36.019 ******
2026-02-17 06:24:11.873467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-17 06:24:11.873585 | orchestrator |
2026-02-17 06:24:11.873601 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 06:24:11.873614 | orchestrator | Tuesday 17 February 2026 06:23:21 +0000 (0:00:01.157) 0:36:37.177 ******
2026-02-17 06:24:11.873625 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.873637 | orchestrator |
2026-02-17 06:24:11.873648 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 06:24:11.873659 | orchestrator | Tuesday 17 February 2026 06:23:23 +0000 (0:00:01.153) 0:36:38.331 ******
2026-02-17 06:24:11.873670 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.873682 | orchestrator |
2026-02-17 06:24:11.873693 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 06:24:11.873704 | orchestrator | Tuesday 17 February 2026 06:23:24 +0000 (0:00:01.466) 0:36:39.797 ******
2026-02-17 06:24:11.873737 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.873749 | orchestrator |
2026-02-17 06:24:11.873760 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 06:24:11.873811 | orchestrator | Tuesday 17 February 2026
06:23:26 +0000 (0:00:01.527) 0:36:41.324 ******
2026-02-17 06:24:11.873823 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.873833 | orchestrator |
2026-02-17 06:24:11.873844 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 06:24:11.873855 | orchestrator | Tuesday 17 February 2026 06:23:27 +0000 (0:00:01.572) 0:36:42.897 ******
2026-02-17 06:24:11.873866 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.873876 | orchestrator |
2026-02-17 06:24:11.873888 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 06:24:11.873898 | orchestrator | Tuesday 17 February 2026 06:23:28 +0000 (0:00:01.200) 0:36:44.097 ******
2026-02-17 06:24:11.873909 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.873920 | orchestrator |
2026-02-17 06:24:11.873932 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 06:24:11.873942 | orchestrator | Tuesday 17 February 2026 06:23:29 +0000 (0:00:01.139) 0:36:45.237 ******
2026-02-17 06:24:11.873953 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.873964 | orchestrator |
2026-02-17 06:24:11.873990 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 06:24:11.874002 | orchestrator | Tuesday 17 February 2026 06:23:31 +0000 (0:00:01.161) 0:36:46.398 ******
2026-02-17 06:24:11.874072 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.874089 | orchestrator |
2026-02-17 06:24:11.874101 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 06:24:11.874114 | orchestrator | Tuesday 17 February 2026 06:23:32 +0000 (0:00:01.539) 0:36:47.938 ******
2026-02-17 06:24:11.874127 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.874139 | orchestrator |
2026-02-17 06:24:11.874152 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 06:24:11.874164 | orchestrator | Tuesday 17 February 2026 06:23:34 +0000 (0:00:01.523) 0:36:49.461 ******
2026-02-17 06:24:11.874176 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874189 | orchestrator |
2026-02-17 06:24:11.874201 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:24:11.874214 | orchestrator | Tuesday 17 February 2026 06:23:35 +0000 (0:00:01.122) 0:36:50.584 ******
2026-02-17 06:24:11.874226 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874238 | orchestrator |
2026-02-17 06:24:11.874251 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:24:11.874262 | orchestrator | Tuesday 17 February 2026 06:23:36 +0000 (0:00:01.104) 0:36:51.689 ******
2026-02-17 06:24:11.874275 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.874287 | orchestrator |
2026-02-17 06:24:11.874300 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:24:11.874312 | orchestrator | Tuesday 17 February 2026 06:23:37 +0000 (0:00:01.175) 0:36:52.864 ******
2026-02-17 06:24:11.874324 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.874336 | orchestrator |
2026-02-17 06:24:11.874348 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:24:11.874361 | orchestrator | Tuesday 17 February 2026 06:23:38 +0000 (0:00:01.214) 0:36:54.079 ******
2026-02-17 06:24:11.874373 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.874387 | orchestrator |
2026-02-17 06:24:11.874400 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:24:11.874411 | orchestrator | Tuesday 17 February 2026 06:23:39 +0000 (0:00:01.178) 0:36:55.257 ******
2026-02-17 06:24:11.874422 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874433 | orchestrator |
2026-02-17 06:24:11.874444 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:24:11.874455 | orchestrator | Tuesday 17 February 2026 06:23:41 +0000 (0:00:01.163) 0:36:56.421 ******
2026-02-17 06:24:11.874474 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874485 | orchestrator |
2026-02-17 06:24:11.874496 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:24:11.874507 | orchestrator | Tuesday 17 February 2026 06:23:42 +0000 (0:00:01.131) 0:36:57.552 ******
2026-02-17 06:24:11.874518 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874530 | orchestrator |
2026-02-17 06:24:11.874541 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:24:11.874552 | orchestrator | Tuesday 17 February 2026 06:23:43 +0000 (0:00:01.183) 0:36:58.735 ******
2026-02-17 06:24:11.874563 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.874574 | orchestrator |
2026-02-17 06:24:11.874585 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:24:11.874596 | orchestrator | Tuesday 17 February 2026 06:23:44 +0000 (0:00:01.142) 0:36:59.878 ******
2026-02-17 06:24:11.874606 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.874617 | orchestrator |
2026-02-17 06:24:11.874628 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:24:11.874639 | orchestrator | Tuesday 17 February 2026 06:23:45 +0000 (0:00:01.310) 0:37:01.188 ******
2026-02-17 06:24:11.874650 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874661 | orchestrator |
2026-02-17 06:24:11.874689 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:24:11.874701 | orchestrator | Tuesday 17 February 2026 06:23:47 +0000 (0:00:01.207) 0:37:02.396 ******
2026-02-17 06:24:11.874711 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874722 | orchestrator |
2026-02-17 06:24:11.874733 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:24:11.874744 | orchestrator | Tuesday 17 February 2026 06:23:48 +0000 (0:00:01.161) 0:37:03.558 ******
2026-02-17 06:24:11.874755 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874766 | orchestrator |
2026-02-17 06:24:11.874797 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:24:11.874808 | orchestrator | Tuesday 17 February 2026 06:23:49 +0000 (0:00:01.146) 0:37:04.704 ******
2026-02-17 06:24:11.874819 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874830 | orchestrator |
2026-02-17 06:24:11.874841 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:24:11.874851 | orchestrator | Tuesday 17 February 2026 06:23:50 +0000 (0:00:01.121) 0:37:05.826 ******
2026-02-17 06:24:11.874862 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874872 | orchestrator |
2026-02-17 06:24:11.874883 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:24:11.874894 | orchestrator | Tuesday 17 February 2026 06:23:51 +0000 (0:00:01.117) 0:37:06.944 ******
2026-02-17 06:24:11.874905 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874915 | orchestrator |
2026-02-17 06:24:11.874926 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:24:11.874937 | orchestrator | Tuesday 17 February 2026 06:23:52 +0000 (0:00:01.105) 0:37:08.049 ******
2026-02-17 06:24:11.874947 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.874958 | orchestrator |
2026-02-17 06:24:11.874969 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:24:11.874980 | orchestrator | Tuesday 17 February 2026 06:23:53 +0000 (0:00:01.148) 0:37:09.198 ******
2026-02-17 06:24:11.874991 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.875002 | orchestrator |
2026-02-17 06:24:11.875013 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:24:11.875030 | orchestrator | Tuesday 17 February 2026 06:23:55 +0000 (0:00:01.128) 0:37:10.326 ******
2026-02-17 06:24:11.875041 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.875052 | orchestrator |
2026-02-17 06:24:11.875063 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:24:11.875073 | orchestrator | Tuesday 17 February 2026 06:23:56 +0000 (0:00:01.133) 0:37:11.459 ******
2026-02-17 06:24:11.875093 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.875103 | orchestrator |
2026-02-17 06:24:11.875114 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:24:11.875125 | orchestrator | Tuesday 17 February 2026 06:23:57 +0000 (0:00:01.149) 0:37:12.609 ******
2026-02-17 06:24:11.875136 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.875146 | orchestrator |
2026-02-17 06:24:11.875157 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:24:11.875168 | orchestrator | Tuesday 17 February 2026 06:23:58 +0000 (0:00:01.107) 0:37:13.717 ******
2026-02-17 06:24:11.875179 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.875189 | orchestrator |
2026-02-17 06:24:11.875200 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:24:11.875211 | orchestrator | Tuesday 17 February 2026 06:23:59 +0000 (0:00:01.251) 0:37:14.969 ******
2026-02-17 06:24:11.875221 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.875232 | orchestrator |
2026-02-17 06:24:11.875242 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:24:11.875253 | orchestrator | Tuesday 17 February 2026 06:24:01 +0000 (0:00:01.949) 0:37:16.918 ******
2026-02-17 06:24:11.875264 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.875275 | orchestrator |
2026-02-17 06:24:11.875285 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:24:11.875296 | orchestrator | Tuesday 17 February 2026 06:24:03 +0000 (0:00:02.253) 0:37:19.171 ******
2026-02-17 06:24:11.875307 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-17 06:24:11.875318 | orchestrator |
2026-02-17 06:24:11.875329 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:24:11.875340 | orchestrator | Tuesday 17 February 2026 06:24:05 +0000 (0:00:01.128) 0:37:20.300 ******
2026-02-17 06:24:11.875350 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.875361 | orchestrator |
2026-02-17 06:24:11.875372 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:24:11.875383 | orchestrator | Tuesday 17 February 2026 06:24:06 +0000 (0:00:01.128) 0:37:21.429 ******
2026-02-17 06:24:11.875394 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.875404 | orchestrator |
2026-02-17 06:24:11.875415 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:24:11.875426 | orchestrator | Tuesday 17 February 2026 06:24:07 +0000 (0:00:01.154) 0:37:22.583 ******
2026-02-17 06:24:11.875437 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:24:11.875448 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:24:11.875458 | orchestrator |
2026-02-17 06:24:11.875469 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:24:11.875480 | orchestrator | Tuesday 17 February 2026 06:24:09 +0000 (0:00:01.951) 0:37:24.535 ******
2026-02-17 06:24:11.875490 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:24:11.875501 | orchestrator |
2026-02-17 06:24:11.875512 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:24:11.875522 | orchestrator | Tuesday 17 February 2026 06:24:10 +0000 (0:00:01.433) 0:37:25.968 ******
2026-02-17 06:24:11.875533 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:11.875544 | orchestrator |
2026-02-17 06:24:11.875555 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:24:11.875573 | orchestrator | Tuesday 17 February 2026 06:24:11 +0000 (0:00:01.180) 0:37:27.119 ******
2026-02-17 06:24:59.367968 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:59.368084 | orchestrator |
2026-02-17 06:24:59.368100 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:24:59.368114 | orchestrator | Tuesday 17 February 2026 06:24:13 +0000 (0:00:01.180) 0:37:28.299 ******
2026-02-17 06:24:59.368125 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:24:59.368160 | orchestrator |
2026-02-17 06:24:59.368172 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:24:59.368183 | orchestrator | Tuesday 17 February 2026 06:24:14 +0000 (0:00:01.099) 0:37:29.399 ******
2026-02-17 06:24:59.368195 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-17 06:24:59.368207 | orchestrator |
2026-02-17 06:24:59.368218 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-17 06:24:59.368229 | orchestrator | Tuesday 17 February 2026 06:24:15 +0000 (0:00:01.317) 0:37:30.716 ****** 2026-02-17 06:24:59.368240 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:24:59.368252 | orchestrator | 2026-02-17 06:24:59.368263 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-17 06:24:59.368275 | orchestrator | Tuesday 17 February 2026 06:24:17 +0000 (0:00:01.791) 0:37:32.507 ****** 2026-02-17 06:24:59.368286 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-17 06:24:59.368296 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-17 06:24:59.368307 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-17 06:24:59.368318 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368329 | orchestrator | 2026-02-17 06:24:59.368340 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-17 06:24:59.368351 | orchestrator | Tuesday 17 February 2026 06:24:18 +0000 (0:00:01.192) 0:37:33.701 ****** 2026-02-17 06:24:59.368362 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368373 | orchestrator | 2026-02-17 06:24:59.368384 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-17 06:24:59.368409 | orchestrator | Tuesday 17 February 2026 06:24:19 +0000 (0:00:01.153) 0:37:34.854 ****** 2026-02-17 06:24:59.368421 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368432 | orchestrator | 2026-02-17 06:24:59.368443 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-17 06:24:59.368454 | orchestrator | Tuesday 17 February 2026 06:24:20 +0000 
(0:00:01.216) 0:37:36.070 ****** 2026-02-17 06:24:59.368465 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368477 | orchestrator | 2026-02-17 06:24:59.368488 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-17 06:24:59.368501 | orchestrator | Tuesday 17 February 2026 06:24:21 +0000 (0:00:01.142) 0:37:37.213 ****** 2026-02-17 06:24:59.368513 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368526 | orchestrator | 2026-02-17 06:24:59.368538 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-17 06:24:59.368550 | orchestrator | Tuesday 17 February 2026 06:24:23 +0000 (0:00:01.171) 0:37:38.385 ****** 2026-02-17 06:24:59.368562 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368575 | orchestrator | 2026-02-17 06:24:59.368588 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:24:59.368600 | orchestrator | Tuesday 17 February 2026 06:24:24 +0000 (0:00:01.138) 0:37:39.523 ****** 2026-02-17 06:24:59.368612 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:24:59.368624 | orchestrator | 2026-02-17 06:24:59.368637 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:24:59.368649 | orchestrator | Tuesday 17 February 2026 06:24:26 +0000 (0:00:02.459) 0:37:41.983 ****** 2026-02-17 06:24:59.368662 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:24:59.368674 | orchestrator | 2026-02-17 06:24:59.368686 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-17 06:24:59.368698 | orchestrator | Tuesday 17 February 2026 06:24:27 +0000 (0:00:01.131) 0:37:43.114 ****** 2026-02-17 06:24:59.368711 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-17 06:24:59.368723 | orchestrator | 2026-02-17 
06:24:59.368735 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-17 06:24:59.368755 | orchestrator | Tuesday 17 February 2026 06:24:29 +0000 (0:00:01.182) 0:37:44.297 ****** 2026-02-17 06:24:59.368767 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368780 | orchestrator | 2026-02-17 06:24:59.368868 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-17 06:24:59.368882 | orchestrator | Tuesday 17 February 2026 06:24:30 +0000 (0:00:01.193) 0:37:45.491 ****** 2026-02-17 06:24:59.368894 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368905 | orchestrator | 2026-02-17 06:24:59.368916 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-17 06:24:59.368927 | orchestrator | Tuesday 17 February 2026 06:24:31 +0000 (0:00:01.123) 0:37:46.614 ****** 2026-02-17 06:24:59.368938 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368950 | orchestrator | 2026-02-17 06:24:59.368961 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-17 06:24:59.368972 | orchestrator | Tuesday 17 February 2026 06:24:32 +0000 (0:00:01.215) 0:37:47.829 ****** 2026-02-17 06:24:59.368984 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.368994 | orchestrator | 2026-02-17 06:24:59.369005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-17 06:24:59.369016 | orchestrator | Tuesday 17 February 2026 06:24:33 +0000 (0:00:01.181) 0:37:49.010 ****** 2026-02-17 06:24:59.369027 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369038 | orchestrator | 2026-02-17 06:24:59.369049 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-17 06:24:59.369060 | orchestrator | Tuesday 17 February 2026 06:24:34 +0000 (0:00:01.149) 
0:37:50.160 ****** 2026-02-17 06:24:59.369071 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369083 | orchestrator | 2026-02-17 06:24:59.369112 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-17 06:24:59.369123 | orchestrator | Tuesday 17 February 2026 06:24:36 +0000 (0:00:01.187) 0:37:51.347 ****** 2026-02-17 06:24:59.369134 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369145 | orchestrator | 2026-02-17 06:24:59.369156 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-17 06:24:59.369167 | orchestrator | Tuesday 17 February 2026 06:24:37 +0000 (0:00:01.154) 0:37:52.502 ****** 2026-02-17 06:24:59.369178 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369189 | orchestrator | 2026-02-17 06:24:59.369200 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-17 06:24:59.369211 | orchestrator | Tuesday 17 February 2026 06:24:38 +0000 (0:00:01.253) 0:37:53.756 ****** 2026-02-17 06:24:59.369221 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:24:59.369232 | orchestrator | 2026-02-17 06:24:59.369243 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-17 06:24:59.369254 | orchestrator | Tuesday 17 February 2026 06:24:39 +0000 (0:00:01.152) 0:37:54.908 ****** 2026-02-17 06:24:59.369265 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-17 06:24:59.369276 | orchestrator | 2026-02-17 06:24:59.369287 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-17 06:24:59.369297 | orchestrator | Tuesday 17 February 2026 06:24:40 +0000 (0:00:01.136) 0:37:56.044 ****** 2026-02-17 06:24:59.369308 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-17 06:24:59.369319 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-02-17 06:24:59.369330 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-17 06:24:59.369341 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-17 06:24:59.369352 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-17 06:24:59.369363 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-17 06:24:59.369374 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-17 06:24:59.369385 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-17 06:24:59.369402 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-17 06:24:59.369420 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-17 06:24:59.369431 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-17 06:24:59.369442 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-17 06:24:59.369453 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 06:24:59.369463 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 06:24:59.369474 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-17 06:24:59.369485 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-17 06:24:59.369496 | orchestrator | 2026-02-17 06:24:59.369506 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 06:24:59.369517 | orchestrator | Tuesday 17 February 2026 06:24:47 +0000 (0:00:06.511) 0:38:02.555 ****** 2026-02-17 06:24:59.369528 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-17 06:24:59.369539 | orchestrator | 2026-02-17 06:24:59.369550 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-02-17 06:24:59.369561 | orchestrator | Tuesday 17 February 2026 06:24:48 +0000 (0:00:01.619) 0:38:04.174 ****** 2026-02-17 06:24:59.369571 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:24:59.369584 | orchestrator | 2026-02-17 06:24:59.369594 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-17 06:24:59.369605 | orchestrator | Tuesday 17 February 2026 06:24:50 +0000 (0:00:01.538) 0:38:05.713 ****** 2026-02-17 06:24:59.369616 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:24:59.369627 | orchestrator | 2026-02-17 06:24:59.369637 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-17 06:24:59.369648 | orchestrator | Tuesday 17 February 2026 06:24:52 +0000 (0:00:01.980) 0:38:07.693 ****** 2026-02-17 06:24:59.369659 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369670 | orchestrator | 2026-02-17 06:24:59.369681 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-17 06:24:59.369692 | orchestrator | Tuesday 17 February 2026 06:24:53 +0000 (0:00:01.131) 0:38:08.825 ****** 2026-02-17 06:24:59.369702 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369713 | orchestrator | 2026-02-17 06:24:59.369724 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-17 06:24:59.369735 | orchestrator | Tuesday 17 February 2026 06:24:54 +0000 (0:00:01.116) 0:38:09.942 ****** 2026-02-17 06:24:59.369746 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369756 | orchestrator | 2026-02-17 06:24:59.369767 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-02-17 06:24:59.369779 | orchestrator | Tuesday 17 February 2026 06:24:55 +0000 (0:00:01.210) 0:38:11.153 ****** 2026-02-17 06:24:59.369808 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369819 | orchestrator | 2026-02-17 06:24:59.369830 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-17 06:24:59.369841 | orchestrator | Tuesday 17 February 2026 06:24:57 +0000 (0:00:01.144) 0:38:12.297 ****** 2026-02-17 06:24:59.369852 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369862 | orchestrator | 2026-02-17 06:24:59.369873 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-17 06:24:59.369884 | orchestrator | Tuesday 17 February 2026 06:24:58 +0000 (0:00:01.185) 0:38:13.482 ****** 2026-02-17 06:24:59.369895 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:24:59.369906 | orchestrator | 2026-02-17 06:24:59.369924 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-17 06:25:52.609945 | orchestrator | Tuesday 17 February 2026 06:24:59 +0000 (0:00:01.140) 0:38:14.623 ****** 2026-02-17 06:25:52.610143 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610163 | orchestrator | 2026-02-17 06:25:52.610176 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-17 06:25:52.610188 | orchestrator | Tuesday 17 February 2026 06:25:00 +0000 (0:00:01.165) 0:38:15.789 ****** 2026-02-17 06:25:52.610199 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610210 | orchestrator | 2026-02-17 06:25:52.610222 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-17 06:25:52.610233 | orchestrator | Tuesday 17 February 2026 06:25:01 +0000 (0:00:01.129) 0:38:16.918 ****** 
2026-02-17 06:25:52.610244 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610255 | orchestrator | 2026-02-17 06:25:52.610265 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-17 06:25:52.610277 | orchestrator | Tuesday 17 February 2026 06:25:02 +0000 (0:00:01.153) 0:38:18.072 ****** 2026-02-17 06:25:52.610288 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610299 | orchestrator | 2026-02-17 06:25:52.610310 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-17 06:25:52.610321 | orchestrator | Tuesday 17 February 2026 06:25:03 +0000 (0:00:01.153) 0:38:19.226 ****** 2026-02-17 06:25:52.610332 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:25:52.610344 | orchestrator | 2026-02-17 06:25:52.610354 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-17 06:25:52.610365 | orchestrator | Tuesday 17 February 2026 06:25:05 +0000 (0:00:01.253) 0:38:20.479 ****** 2026-02-17 06:25:52.610376 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-17 06:25:52.610387 | orchestrator | 2026-02-17 06:25:52.610398 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-17 06:25:52.610409 | orchestrator | Tuesday 17 February 2026 06:25:09 +0000 (0:00:04.486) 0:38:24.966 ****** 2026-02-17 06:25:52.610435 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:25:52.610450 | orchestrator | 2026-02-17 06:25:52.610462 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-17 06:25:52.610475 | orchestrator | Tuesday 17 February 2026 06:25:10 +0000 (0:00:01.180) 0:38:26.147 ****** 2026-02-17 06:25:52.610491 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-17 06:25:52.610507 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-17 06:25:52.610521 | orchestrator | 2026-02-17 06:25:52.610533 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-17 06:25:52.610546 | orchestrator | Tuesday 17 February 2026 06:25:18 +0000 (0:00:07.650) 0:38:33.797 ****** 2026-02-17 06:25:52.610558 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610570 | orchestrator | 2026-02-17 06:25:52.610582 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-17 06:25:52.610595 | orchestrator | Tuesday 17 February 2026 06:25:19 +0000 (0:00:01.169) 0:38:34.966 ****** 2026-02-17 06:25:52.610607 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610619 | orchestrator | 2026-02-17 06:25:52.610631 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:25:52.610643 | orchestrator | Tuesday 17 February 2026 06:25:20 +0000 (0:00:01.164) 0:38:36.131 ****** 2026-02-17 06:25:52.610664 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610676 | orchestrator | 2026-02-17 06:25:52.610689 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 
06:25:52.610701 | orchestrator | Tuesday 17 February 2026 06:25:22 +0000 (0:00:01.146) 0:38:37.278 ****** 2026-02-17 06:25:52.610713 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610725 | orchestrator | 2026-02-17 06:25:52.610738 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:25:52.610751 | orchestrator | Tuesday 17 February 2026 06:25:23 +0000 (0:00:01.155) 0:38:38.433 ****** 2026-02-17 06:25:52.610763 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610776 | orchestrator | 2026-02-17 06:25:52.610789 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:25:52.610801 | orchestrator | Tuesday 17 February 2026 06:25:24 +0000 (0:00:01.149) 0:38:39.583 ****** 2026-02-17 06:25:52.610831 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:25:52.610842 | orchestrator | 2026-02-17 06:25:52.610853 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:25:52.610864 | orchestrator | Tuesday 17 February 2026 06:25:25 +0000 (0:00:01.226) 0:38:40.810 ****** 2026-02-17 06:25:52.610875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:25:52.610886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 06:25:52.610897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 06:25:52.610908 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.610919 | orchestrator | 2026-02-17 06:25:52.610930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:25:52.610959 | orchestrator | Tuesday 17 February 2026 06:25:26 +0000 (0:00:01.395) 0:38:42.206 ****** 2026-02-17 06:25:52.610971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:25:52.610982 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-17 06:25:52.610992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 06:25:52.611003 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.611014 | orchestrator | 2026-02-17 06:25:52.611025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:25:52.611036 | orchestrator | Tuesday 17 February 2026 06:25:28 +0000 (0:00:01.810) 0:38:44.017 ****** 2026-02-17 06:25:52.611047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:25:52.611057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 06:25:52.611068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 06:25:52.611079 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.611089 | orchestrator | 2026-02-17 06:25:52.611100 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:25:52.611111 | orchestrator | Tuesday 17 February 2026 06:25:30 +0000 (0:00:01.838) 0:38:45.856 ****** 2026-02-17 06:25:52.611122 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:25:52.611133 | orchestrator | 2026-02-17 06:25:52.611144 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:25:52.611155 | orchestrator | Tuesday 17 February 2026 06:25:31 +0000 (0:00:01.297) 0:38:47.153 ****** 2026-02-17 06:25:52.611166 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-17 06:25:52.611177 | orchestrator | 2026-02-17 06:25:52.611187 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-17 06:25:52.611198 | orchestrator | Tuesday 17 February 2026 06:25:33 +0000 (0:00:01.408) 0:38:48.561 ****** 2026-02-17 06:25:52.611209 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:25:52.611220 | orchestrator | 2026-02-17 06:25:52.611231 | orchestrator | TASK 
[ceph-osd : Set_fact add_osd] ********************************************* 2026-02-17 06:25:52.611242 | orchestrator | Tuesday 17 February 2026 06:25:35 +0000 (0:00:01.790) 0:38:50.352 ****** 2026-02-17 06:25:52.611253 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:25:52.611271 | orchestrator | 2026-02-17 06:25:52.611282 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:25:52.611293 | orchestrator | Tuesday 17 February 2026 06:25:36 +0000 (0:00:01.138) 0:38:51.491 ****** 2026-02-17 06:25:52.611303 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:25:52.611401 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:25:52.611420 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:25:52.611431 | orchestrator | 2026-02-17 06:25:52.611442 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-17 06:25:52.611453 | orchestrator | Tuesday 17 February 2026 06:25:37 +0000 (0:00:01.771) 0:38:53.263 ****** 2026-02-17 06:25:52.611480 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-02-17 06:25:52.611503 | orchestrator | 2026-02-17 06:25:52.611514 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-17 06:25:52.611525 | orchestrator | Tuesday 17 February 2026 06:25:39 +0000 (0:00:01.492) 0:38:54.756 ****** 2026-02-17 06:25:52.611535 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.611546 | orchestrator | 2026-02-17 06:25:52.611557 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-17 06:25:52.611568 | orchestrator | Tuesday 17 February 2026 06:25:40 +0000 (0:00:01.176) 0:38:55.932 ****** 2026-02-17 06:25:52.611579 | 
orchestrator | skipping: [testbed-node-3] 2026-02-17 06:25:52.611590 | orchestrator | 2026-02-17 06:25:52.611601 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-17 06:25:52.611612 | orchestrator | Tuesday 17 February 2026 06:25:41 +0000 (0:00:01.189) 0:38:57.122 ****** 2026-02-17 06:25:52.611622 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:25:52.611633 | orchestrator | 2026-02-17 06:25:52.611644 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-17 06:25:52.611655 | orchestrator | Tuesday 17 February 2026 06:25:43 +0000 (0:00:01.436) 0:38:58.558 ****** 2026-02-17 06:25:52.611666 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:25:52.611677 | orchestrator | 2026-02-17 06:25:52.611688 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-17 06:25:52.611698 | orchestrator | Tuesday 17 February 2026 06:25:44 +0000 (0:00:01.212) 0:38:59.771 ****** 2026-02-17 06:25:52.611709 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-17 06:25:52.611720 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-17 06:25:52.611731 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-17 06:25:52.611742 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-17 06:25:52.611753 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-17 06:25:52.611764 | orchestrator | 2026-02-17 06:25:52.611774 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-17 06:25:52.611785 | orchestrator | Tuesday 17 February 2026 06:25:49 +0000 (0:00:05.449) 0:39:05.221 ****** 2026-02-17 06:25:52.611796 | orchestrator | skipping: [testbed-node-3] 
2026-02-17 06:25:52.611822 | orchestrator | 2026-02-17 06:25:52.611834 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-17 06:25:52.611845 | orchestrator | Tuesday 17 February 2026 06:25:51 +0000 (0:00:01.155) 0:39:06.376 ****** 2026-02-17 06:25:52.611856 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-02-17 06:25:52.611867 | orchestrator | 2026-02-17 06:25:52.611878 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-17 06:26:58.483941 | orchestrator | Tuesday 17 February 2026 06:25:52 +0000 (0:00:01.489) 0:39:07.866 ****** 2026-02-17 06:26:58.484080 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-17 06:26:58.484140 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-17 06:26:58.484164 | orchestrator | 2026-02-17 06:26:58.484185 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-17 06:26:58.484204 | orchestrator | Tuesday 17 February 2026 06:25:54 +0000 (0:00:01.855) 0:39:09.722 ****** 2026-02-17 06:26:58.484220 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:26:58.484231 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 06:26:58.484242 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 06:26:58.484253 | orchestrator | 2026-02-17 06:26:58.484265 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-17 06:26:58.484276 | orchestrator | Tuesday 17 February 2026 06:25:57 +0000 (0:00:03.205) 0:39:12.927 ****** 2026-02-17 06:26:58.484287 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-17 06:26:58.484298 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 06:26:58.484309 | orchestrator | ok: [testbed-node-3] 
2026-02-17 06:26:58.484320 | orchestrator | 2026-02-17 06:26:58.484331 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-17 06:26:58.484342 | orchestrator | Tuesday 17 February 2026 06:25:59 +0000 (0:00:01.998) 0:39:14.925 ****** 2026-02-17 06:26:58.484352 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:26:58.484363 | orchestrator | 2026-02-17 06:26:58.484374 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-17 06:26:58.484385 | orchestrator | Tuesday 17 February 2026 06:26:00 +0000 (0:00:01.278) 0:39:16.203 ****** 2026-02-17 06:26:58.484396 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:26:58.484406 | orchestrator | 2026-02-17 06:26:58.484418 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-17 06:26:58.484444 | orchestrator | Tuesday 17 February 2026 06:26:02 +0000 (0:00:01.237) 0:39:17.441 ****** 2026-02-17 06:26:58.484457 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:26:58.484469 | orchestrator | 2026-02-17 06:26:58.484482 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-17 06:26:58.484494 | orchestrator | Tuesday 17 February 2026 06:26:03 +0000 (0:00:01.172) 0:39:18.613 ****** 2026-02-17 06:26:58.484506 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-02-17 06:26:58.484520 | orchestrator | 2026-02-17 06:26:58.484533 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-17 06:26:58.484545 | orchestrator | Tuesday 17 February 2026 06:26:04 +0000 (0:00:01.480) 0:39:20.094 ****** 2026-02-17 06:26:58.484557 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:26:58.484570 | orchestrator | 2026-02-17 06:26:58.484582 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-02-17 06:26:58.484595 | orchestrator | Tuesday 17 February 2026 06:26:06 +0000 (0:00:01.534) 0:39:21.629 ****** 2026-02-17 06:26:58.484607 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:26:58.484619 | orchestrator | 2026-02-17 06:26:58.484632 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-17 06:26:58.484644 | orchestrator | Tuesday 17 February 2026 06:26:10 +0000 (0:00:03.891) 0:39:25.521 ****** 2026-02-17 06:26:58.484656 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-02-17 06:26:58.484668 | orchestrator | 2026-02-17 06:26:58.484680 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-17 06:26:58.484692 | orchestrator | Tuesday 17 February 2026 06:26:11 +0000 (0:00:01.492) 0:39:27.014 ****** 2026-02-17 06:26:58.484704 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:26:58.484716 | orchestrator | 2026-02-17 06:26:58.484728 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-17 06:26:58.484740 | orchestrator | Tuesday 17 February 2026 06:26:13 +0000 (0:00:01.995) 0:39:29.010 ****** 2026-02-17 06:26:58.484752 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:26:58.484764 | orchestrator | 2026-02-17 06:26:58.484781 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-17 06:26:58.484819 | orchestrator | Tuesday 17 February 2026 06:26:15 +0000 (0:00:02.000) 0:39:31.011 ****** 2026-02-17 06:26:58.484879 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:26:58.484898 | orchestrator | 2026-02-17 06:26:58.484916 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-17 06:26:58.484933 | orchestrator | Tuesday 17 February 2026 06:26:17 +0000 (0:00:02.217) 0:39:33.228 ****** 2026-02-17 06:26:58.484950 | orchestrator | skipping: [testbed-node-3] 
2026-02-17 06:26:58.484969 | orchestrator | 2026-02-17 06:26:58.484986 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-17 06:26:58.485005 | orchestrator | Tuesday 17 February 2026 06:26:19 +0000 (0:00:01.140) 0:39:34.369 ****** 2026-02-17 06:26:58.485016 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:26:58.485027 | orchestrator | 2026-02-17 06:26:58.485038 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-17 06:26:58.485048 | orchestrator | Tuesday 17 February 2026 06:26:20 +0000 (0:00:01.174) 0:39:35.543 ****** 2026-02-17 06:26:58.485059 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-17 06:26:58.485070 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-17 06:26:58.485081 | orchestrator | 2026-02-17 06:26:58.485092 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-17 06:26:58.485103 | orchestrator | Tuesday 17 February 2026 06:26:22 +0000 (0:00:01.819) 0:39:37.362 ****** 2026-02-17 06:26:58.485113 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-17 06:26:58.485124 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-17 06:26:58.485135 | orchestrator | 2026-02-17 06:26:58.485146 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-17 06:26:58.485157 | orchestrator | Tuesday 17 February 2026 06:26:24 +0000 (0:00:02.853) 0:39:40.215 ****** 2026-02-17 06:26:58.485168 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-17 06:26:58.485200 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-17 06:26:58.485211 | orchestrator | 2026-02-17 06:26:58.485222 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-17 06:26:58.485233 | orchestrator | Tuesday 17 February 2026 06:26:29 +0000 (0:00:04.736) 0:39:44.952 ****** 2026-02-17 06:26:58.485244 | orchestrator 
| skipping: [testbed-node-3] 2026-02-17 06:26:58.485255 | orchestrator | 2026-02-17 06:26:58.485266 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-17 06:26:58.485277 | orchestrator | Tuesday 17 February 2026 06:26:30 +0000 (0:00:01.253) 0:39:46.206 ****** 2026-02-17 06:26:58.485288 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:26:58.485299 | orchestrator | 2026-02-17 06:26:58.485310 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-17 06:26:58.485320 | orchestrator | Tuesday 17 February 2026 06:26:32 +0000 (0:00:01.221) 0:39:47.428 ****** 2026-02-17 06:26:58.485331 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:26:58.485342 | orchestrator | 2026-02-17 06:26:58.485353 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-17 06:26:58.485364 | orchestrator | Tuesday 17 February 2026 06:26:33 +0000 (0:00:01.760) 0:39:49.189 ****** 2026-02-17 06:26:58.485375 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:26:58.485386 | orchestrator | 2026-02-17 06:26:58.485396 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-17 06:26:58.485408 | orchestrator | Tuesday 17 February 2026 06:26:35 +0000 (0:00:01.155) 0:39:50.344 ****** 2026-02-17 06:26:58.485418 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:26:58.485429 | orchestrator | 2026-02-17 06:26:58.485440 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-17 06:26:58.485451 | orchestrator | Tuesday 17 February 2026 06:26:36 +0000 (0:00:01.174) 0:39:51.521 ****** 2026-02-17 06:26:58.485462 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-17 06:26:58.485482 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-17 06:26:58.485502 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:26:58.485513 | orchestrator | 2026-02-17 06:26:58.485524 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-17 06:26:58.485535 | orchestrator | 2026-02-17 06:26:58.485546 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:26:58.485557 | orchestrator | Tuesday 17 February 2026 06:26:44 +0000 (0:00:07.978) 0:39:59.500 ****** 2026-02-17 06:26:58.485567 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-17 06:26:58.485578 | orchestrator | 2026-02-17 06:26:58.485589 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:26:58.485600 | orchestrator | Tuesday 17 February 2026 06:26:45 +0000 (0:00:01.192) 0:40:00.692 ****** 2026-02-17 06:26:58.485610 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:26:58.485621 | orchestrator | 2026-02-17 06:26:58.485632 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:26:58.485643 | orchestrator | Tuesday 17 February 2026 06:26:46 +0000 (0:00:01.524) 0:40:02.217 ****** 2026-02-17 06:26:58.485654 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:26:58.485665 | orchestrator | 2026-02-17 06:26:58.485676 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:26:58.485687 | orchestrator | Tuesday 17 February 2026 06:26:48 +0000 (0:00:01.167) 0:40:03.384 ****** 2026-02-17 06:26:58.485698 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:26:58.485709 | orchestrator | 2026-02-17 06:26:58.485719 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2026-02-17 06:26:58.485730 | orchestrator | Tuesday 17 February 2026 06:26:49 +0000 (0:00:01.490) 0:40:04.875 ****** 2026-02-17 06:26:58.485741 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:26:58.485752 | orchestrator | 2026-02-17 06:26:58.485763 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:26:58.485774 | orchestrator | Tuesday 17 February 2026 06:26:50 +0000 (0:00:01.167) 0:40:06.043 ****** 2026-02-17 06:26:58.485784 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:26:58.485795 | orchestrator | 2026-02-17 06:26:58.485806 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:26:58.485817 | orchestrator | Tuesday 17 February 2026 06:26:51 +0000 (0:00:01.170) 0:40:07.213 ****** 2026-02-17 06:26:58.485827 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:26:58.485881 | orchestrator | 2026-02-17 06:26:58.485901 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:26:58.485919 | orchestrator | Tuesday 17 February 2026 06:26:53 +0000 (0:00:01.198) 0:40:08.411 ****** 2026-02-17 06:26:58.485933 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:26:58.485944 | orchestrator | 2026-02-17 06:26:58.485955 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:26:58.485966 | orchestrator | Tuesday 17 February 2026 06:26:54 +0000 (0:00:01.166) 0:40:09.577 ****** 2026-02-17 06:26:58.485977 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:26:58.485987 | orchestrator | 2026-02-17 06:26:58.485998 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:26:58.486009 | orchestrator | Tuesday 17 February 2026 06:26:55 +0000 (0:00:01.179) 0:40:10.756 ****** 2026-02-17 06:26:58.486092 | orchestrator | ok: [testbed-node-4 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:26:58.486111 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:26:58.486129 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:26:58.486148 | orchestrator | 2026-02-17 06:26:58.486166 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:26:58.486185 | orchestrator | Tuesday 17 February 2026 06:26:57 +0000 (0:00:01.692) 0:40:12.449 ****** 2026-02-17 06:26:58.486204 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:26:58.486234 | orchestrator | 2026-02-17 06:26:58.486252 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:26:58.486283 | orchestrator | Tuesday 17 February 2026 06:26:58 +0000 (0:00:01.291) 0:40:13.740 ****** 2026-02-17 06:27:23.793371 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:27:23.793483 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:27:23.793499 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:27:23.793512 | orchestrator | 2026-02-17 06:27:23.793525 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:27:23.793537 | orchestrator | Tuesday 17 February 2026 06:27:01 +0000 (0:00:02.966) 0:40:16.706 ****** 2026-02-17 06:27:23.793549 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-17 06:27:23.793560 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-17 06:27:23.793571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-17 06:27:23.793582 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.793594 | orchestrator | 
2026-02-17 06:27:23.793605 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:27:23.793617 | orchestrator | Tuesday 17 February 2026 06:27:02 +0000 (0:00:01.438) 0:40:18.145 ****** 2026-02-17 06:27:23.793630 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:27:23.793644 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:27:23.793672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:27:23.793684 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.793696 | orchestrator | 2026-02-17 06:27:23.793707 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:27:23.793718 | orchestrator | Tuesday 17 February 2026 06:27:04 +0000 (0:00:01.668) 0:40:19.814 ****** 2026-02-17 06:27:23.793731 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:23.793745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:23.793757 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:23.793769 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.793780 | orchestrator | 2026-02-17 06:27:23.793791 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:27:23.793824 | orchestrator | Tuesday 17 February 2026 06:27:05 +0000 (0:00:01.236) 0:40:21.051 ****** 2026-02-17 06:27:23.793839 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:26:58.976156', 'end': '2026-02-17 06:26:59.023336', 'delta': '0:00:00.047180', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:27:23.793896 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:26:59.566819', 'end': '2026-02-17 06:26:59.622012', 'delta': '0:00:00.055193', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:27:23.793918 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:27:00.090420', 'end': '2026-02-17 06:27:00.132226', 'delta': '0:00:00.041806', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:27:23.793932 | orchestrator | 2026-02-17 06:27:23.793944 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:27:23.793957 | orchestrator | Tuesday 17 February 2026 06:27:07 +0000 (0:00:01.254) 0:40:22.306 ****** 2026-02-17 06:27:23.793970 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:23.793983 | orchestrator | 2026-02-17 06:27:23.793996 | orchestrator | TASK [ceph-facts : Get 
current fsid if cluster is already running] ************* 2026-02-17 06:27:23.794009 | orchestrator | Tuesday 17 February 2026 06:27:08 +0000 (0:00:01.291) 0:40:23.598 ****** 2026-02-17 06:27:23.794091 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.794104 | orchestrator | 2026-02-17 06:27:23.794116 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:27:23.794128 | orchestrator | Tuesday 17 February 2026 06:27:09 +0000 (0:00:01.268) 0:40:24.866 ****** 2026-02-17 06:27:23.794140 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:23.794153 | orchestrator | 2026-02-17 06:27:23.794166 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:27:23.794178 | orchestrator | Tuesday 17 February 2026 06:27:10 +0000 (0:00:01.132) 0:40:25.999 ****** 2026-02-17 06:27:23.794191 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:27:23.794203 | orchestrator | 2026-02-17 06:27:23.794216 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:27:23.794228 | orchestrator | Tuesday 17 February 2026 06:27:13 +0000 (0:00:02.305) 0:40:28.304 ****** 2026-02-17 06:27:23.794251 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:23.794264 | orchestrator | 2026-02-17 06:27:23.794277 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:27:23.794289 | orchestrator | Tuesday 17 February 2026 06:27:14 +0000 (0:00:01.187) 0:40:29.491 ****** 2026-02-17 06:27:23.794300 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.794310 | orchestrator | 2026-02-17 06:27:23.794321 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:27:23.794332 | orchestrator | Tuesday 17 February 2026 06:27:15 +0000 (0:00:01.127) 0:40:30.619 ****** 2026-02-17 
06:27:23.794343 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.794354 | orchestrator | 2026-02-17 06:27:23.794365 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:27:23.794376 | orchestrator | Tuesday 17 February 2026 06:27:16 +0000 (0:00:01.248) 0:40:31.867 ****** 2026-02-17 06:27:23.794387 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.794398 | orchestrator | 2026-02-17 06:27:23.794409 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:27:23.794420 | orchestrator | Tuesday 17 February 2026 06:27:17 +0000 (0:00:01.161) 0:40:33.029 ****** 2026-02-17 06:27:23.794431 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.794442 | orchestrator | 2026-02-17 06:27:23.794453 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:27:23.794464 | orchestrator | Tuesday 17 February 2026 06:27:18 +0000 (0:00:01.183) 0:40:34.212 ****** 2026-02-17 06:27:23.794475 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:23.794485 | orchestrator | 2026-02-17 06:27:23.794497 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:27:23.794508 | orchestrator | Tuesday 17 February 2026 06:27:20 +0000 (0:00:01.204) 0:40:35.416 ****** 2026-02-17 06:27:23.794519 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.794530 | orchestrator | 2026-02-17 06:27:23.794541 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:27:23.794552 | orchestrator | Tuesday 17 February 2026 06:27:21 +0000 (0:00:01.323) 0:40:36.740 ****** 2026-02-17 06:27:23.794563 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:23.794573 | orchestrator | 2026-02-17 06:27:23.794584 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] 
*********************** 2026-02-17 06:27:23.794595 | orchestrator | Tuesday 17 February 2026 06:27:22 +0000 (0:00:01.194) 0:40:37.935 ****** 2026-02-17 06:27:23.794606 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:23.794617 | orchestrator | 2026-02-17 06:27:23.794636 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:27:25.251001 | orchestrator | Tuesday 17 February 2026 06:27:23 +0000 (0:00:01.110) 0:40:39.046 ****** 2026-02-17 06:27:25.251123 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:25.251146 | orchestrator | 2026-02-17 06:27:25.251165 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:27:25.251181 | orchestrator | Tuesday 17 February 2026 06:27:24 +0000 (0:00:01.214) 0:40:40.261 ****** 2026-02-17 06:27:25.251200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:27:25.251245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'uuids': ['dab48e76-bd26-40e2-b056-8f58a903c67b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 
'host': '', 'holders': ['71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL']}})  2026-02-17 06:27:25.251300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd9c05b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:27:25.251318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b']}})  2026-02-17 06:27:25.251336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:27:25.251355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:27:25.251396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:27:25.251414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:27:25.251432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08', 'dm-uuid-CRYPT-LUKS2-40a19dfb08344771a8e6cfe7009b1e1d-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:27:25.251467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:27:25.251488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'uuids': ['40a19dfb-0834-4771-a8e6-cfe7009b1e1d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08']}})  2026-02-17 06:27:25.251509 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0']}})  2026-02-17 06:27:25.251528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:27:25.251570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '95350bd6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:27:26.615082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:27:26.615183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:27:26.615201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL', 'dm-uuid-CRYPT-LUKS2-dab48e76bd2640e2b0568f58a903c67b-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:27:26.615216 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:26.615230 | orchestrator | 2026-02-17 06:27:26.615242 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:27:26.615254 | orchestrator | Tuesday 17 February 2026 06:27:26 +0000 (0:00:01.387) 0:40:41.648 ****** 2026-02-17 06:27:26.615266 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:26.615279 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'uuids': ['dab48e76-bd26-40e2-b056-8f58a903c67b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:26.615329 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd9c05b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:26.615360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:26.615376 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:26.615388 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:26.615400 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:26.615412 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:26.615443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08', 'dm-uuid-CRYPT-LUKS2-40a19dfb08344771a8e6cfe7009b1e1d-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'uuids': ['40a19dfb-0834-4771-a8e6-cfe7009b1e1d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986495 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986581 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '95350bd6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL', 'dm-uuid-CRYPT-LUKS2-dab48e76bd2640e2b0568f58a903c67b-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:27:31.986645 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:27:31.986660 | orchestrator | 2026-02-17 06:27:31.986672 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 06:27:31.986685 | orchestrator | Tuesday 17 February 2026 06:27:27 +0000 (0:00:01.417) 0:40:43.065 ****** 2026-02-17 06:27:31.986697 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:31.986708 | orchestrator | 2026-02-17 06:27:31.986720 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 06:27:31.986737 | orchestrator | Tuesday 17 February 2026 06:27:29 +0000 (0:00:01.504) 0:40:44.570 ****** 2026-02-17 06:27:31.986748 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:31.986760 | orchestrator | 2026-02-17 06:27:31.986771 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:27:31.986782 | orchestrator | Tuesday 17 February 2026 06:27:30 +0000 (0:00:01.206) 0:40:45.776 ****** 2026-02-17 06:27:31.986793 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:27:31.986806 | orchestrator | 2026-02-17 06:27:31.986820 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:27:31.986841 | orchestrator | Tuesday 17 February 2026 06:27:31 +0000 (0:00:01.471) 0:40:47.248 ****** 2026-02-17 06:28:14.349203 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.349321 | orchestrator | 2026-02-17 06:28:14.349338 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:28:14.349351 | orchestrator | Tuesday 17 February 2026 06:27:33 +0000 (0:00:01.160) 0:40:48.408 ****** 2026-02-17 06:28:14.349363 | orchestrator | skipping: [testbed-node-4] 2026-02-17 
06:28:14.349374 | orchestrator | 2026-02-17 06:28:14.349386 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:28:14.349397 | orchestrator | Tuesday 17 February 2026 06:27:34 +0000 (0:00:01.249) 0:40:49.658 ****** 2026-02-17 06:28:14.349409 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.349420 | orchestrator | 2026-02-17 06:28:14.349431 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 06:28:14.349443 | orchestrator | Tuesday 17 February 2026 06:27:35 +0000 (0:00:01.207) 0:40:50.865 ****** 2026-02-17 06:28:14.349454 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-17 06:28:14.349466 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-17 06:28:14.349477 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-17 06:28:14.349488 | orchestrator | 2026-02-17 06:28:14.349499 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 06:28:14.349510 | orchestrator | Tuesday 17 February 2026 06:27:37 +0000 (0:00:01.690) 0:40:52.557 ****** 2026-02-17 06:28:14.349521 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-17 06:28:14.349533 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-17 06:28:14.349544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-17 06:28:14.349555 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.349566 | orchestrator | 2026-02-17 06:28:14.349577 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 06:28:14.349588 | orchestrator | Tuesday 17 February 2026 06:27:38 +0000 (0:00:01.216) 0:40:53.773 ****** 2026-02-17 06:28:14.349623 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-17 06:28:14.349636 | 
orchestrator | 2026-02-17 06:28:14.349648 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:28:14.349660 | orchestrator | Tuesday 17 February 2026 06:27:39 +0000 (0:00:01.172) 0:40:54.946 ****** 2026-02-17 06:28:14.349671 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.349682 | orchestrator | 2026-02-17 06:28:14.349693 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:28:14.349704 | orchestrator | Tuesday 17 February 2026 06:27:40 +0000 (0:00:01.209) 0:40:56.155 ****** 2026-02-17 06:28:14.349715 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.349726 | orchestrator | 2026-02-17 06:28:14.349737 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:28:14.349750 | orchestrator | Tuesday 17 February 2026 06:27:42 +0000 (0:00:01.174) 0:40:57.329 ****** 2026-02-17 06:28:14.349763 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.349776 | orchestrator | 2026-02-17 06:28:14.349789 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:28:14.349801 | orchestrator | Tuesday 17 February 2026 06:27:43 +0000 (0:00:01.161) 0:40:58.491 ****** 2026-02-17 06:28:14.349814 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:14.349827 | orchestrator | 2026-02-17 06:28:14.349840 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:28:14.349853 | orchestrator | Tuesday 17 February 2026 06:27:44 +0000 (0:00:01.228) 0:40:59.719 ****** 2026-02-17 06:28:14.349905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:28:14.349920 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:28:14.349932 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-17 06:28:14.349945 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.349957 | orchestrator | 2026-02-17 06:28:14.349969 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:28:14.349981 | orchestrator | Tuesday 17 February 2026 06:27:46 +0000 (0:00:01.937) 0:41:01.657 ****** 2026-02-17 06:28:14.349994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:28:14.350006 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:28:14.350074 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:28:14.350090 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.350103 | orchestrator | 2026-02-17 06:28:14.350116 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:28:14.350126 | orchestrator | Tuesday 17 February 2026 06:27:47 +0000 (0:00:01.403) 0:41:03.061 ****** 2026-02-17 06:28:14.350138 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:28:14.350149 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:28:14.350160 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:28:14.350171 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.350181 | orchestrator | 2026-02-17 06:28:14.350192 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:28:14.350203 | orchestrator | Tuesday 17 February 2026 06:27:49 +0000 (0:00:01.384) 0:41:04.446 ****** 2026-02-17 06:28:14.350214 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:14.350225 | orchestrator | 2026-02-17 06:28:14.350251 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:28:14.350263 | orchestrator | Tuesday 17 February 2026 06:27:50 +0000 
(0:00:01.191) 0:41:05.638 ****** 2026-02-17 06:28:14.350274 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-17 06:28:14.350285 | orchestrator | 2026-02-17 06:28:14.350296 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 06:28:14.350308 | orchestrator | Tuesday 17 February 2026 06:27:51 +0000 (0:00:01.383) 0:41:07.022 ****** 2026-02-17 06:28:14.350346 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:28:14.350358 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:28:14.350369 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:28:14.350380 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 06:28:14.350391 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-17 06:28:14.350402 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 06:28:14.350413 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:28:14.350424 | orchestrator | 2026-02-17 06:28:14.350435 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 06:28:14.350446 | orchestrator | Tuesday 17 February 2026 06:27:53 +0000 (0:00:01.920) 0:41:08.942 ****** 2026-02-17 06:28:14.350457 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:28:14.350467 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:28:14.350478 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:28:14.350489 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-17 06:28:14.350500 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-17 06:28:14.350511 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 06:28:14.350522 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:28:14.350533 | orchestrator | 2026-02-17 06:28:14.350544 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-17 06:28:14.350555 | orchestrator | Tuesday 17 February 2026 06:27:55 +0000 (0:00:02.312) 0:41:11.255 ****** 2026-02-17 06:28:14.350566 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:14.350577 | orchestrator | 2026-02-17 06:28:14.350588 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-17 06:28:14.350599 | orchestrator | Tuesday 17 February 2026 06:27:57 +0000 (0:00:01.200) 0:41:12.456 ****** 2026-02-17 06:28:14.350609 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:14.350621 | orchestrator | 2026-02-17 06:28:14.350632 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-17 06:28:14.350643 | orchestrator | Tuesday 17 February 2026 06:27:57 +0000 (0:00:00.791) 0:41:13.248 ****** 2026-02-17 06:28:14.350653 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:14.350664 | orchestrator | 2026-02-17 06:28:14.350676 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-17 06:28:14.350687 | orchestrator | Tuesday 17 February 2026 06:27:58 +0000 (0:00:00.889) 0:41:14.137 ****** 2026-02-17 06:28:14.350697 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-17 06:28:14.350708 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-17 06:28:14.350720 | orchestrator | 2026-02-17 06:28:14.350730 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-17 06:28:14.350741 | orchestrator | Tuesday 17 February 2026 06:28:02 +0000 (0:00:03.750) 0:41:17.888 ****** 2026-02-17 06:28:14.350752 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-17 06:28:14.350764 | orchestrator | 2026-02-17 06:28:14.350775 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 06:28:14.350786 | orchestrator | Tuesday 17 February 2026 06:28:03 +0000 (0:00:01.308) 0:41:19.197 ****** 2026-02-17 06:28:14.350797 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-17 06:28:14.350808 | orchestrator | 2026-02-17 06:28:14.350819 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 06:28:14.350836 | orchestrator | Tuesday 17 February 2026 06:28:05 +0000 (0:00:01.152) 0:41:20.349 ****** 2026-02-17 06:28:14.350847 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.350858 | orchestrator | 2026-02-17 06:28:14.350893 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 06:28:14.350905 | orchestrator | Tuesday 17 February 2026 06:28:06 +0000 (0:00:01.162) 0:41:21.512 ****** 2026-02-17 06:28:14.350916 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:14.350927 | orchestrator | 2026-02-17 06:28:14.350938 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 06:28:14.350949 | orchestrator | Tuesday 17 February 2026 06:28:07 +0000 (0:00:01.528) 0:41:23.040 ****** 2026-02-17 06:28:14.350960 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:14.350971 | orchestrator | 2026-02-17 06:28:14.350982 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 06:28:14.350993 | orchestrator | 
Tuesday 17 February 2026 06:28:09 +0000 (0:00:01.526) 0:41:24.566 ****** 2026-02-17 06:28:14.351004 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:14.351015 | orchestrator | 2026-02-17 06:28:14.351026 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 06:28:14.351037 | orchestrator | Tuesday 17 February 2026 06:28:10 +0000 (0:00:01.634) 0:41:26.200 ****** 2026-02-17 06:28:14.351048 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.351059 | orchestrator | 2026-02-17 06:28:14.351076 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 06:28:14.351087 | orchestrator | Tuesday 17 February 2026 06:28:12 +0000 (0:00:01.160) 0:41:27.361 ****** 2026-02-17 06:28:14.351098 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.351109 | orchestrator | 2026-02-17 06:28:14.351120 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 06:28:14.351131 | orchestrator | Tuesday 17 February 2026 06:28:13 +0000 (0:00:01.124) 0:41:28.485 ****** 2026-02-17 06:28:14.351142 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:14.351153 | orchestrator | 2026-02-17 06:28:14.351171 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 06:28:54.971454 | orchestrator | Tuesday 17 February 2026 06:28:14 +0000 (0:00:01.117) 0:41:29.603 ****** 2026-02-17 06:28:54.971581 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.971599 | orchestrator | 2026-02-17 06:28:54.971612 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 06:28:54.971624 | orchestrator | Tuesday 17 February 2026 06:28:15 +0000 (0:00:01.529) 0:41:31.133 ****** 2026-02-17 06:28:54.971635 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.971646 | orchestrator | 2026-02-17 06:28:54.971659 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 06:28:54.971671 | orchestrator | Tuesday 17 February 2026 06:28:17 +0000 (0:00:01.555) 0:41:32.688 ****** 2026-02-17 06:28:54.971681 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.971693 | orchestrator | 2026-02-17 06:28:54.971704 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 06:28:54.971715 | orchestrator | Tuesday 17 February 2026 06:28:18 +0000 (0:00:00.797) 0:41:33.486 ****** 2026-02-17 06:28:54.971726 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.971737 | orchestrator | 2026-02-17 06:28:54.971748 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 06:28:54.971759 | orchestrator | Tuesday 17 February 2026 06:28:19 +0000 (0:00:00.839) 0:41:34.325 ****** 2026-02-17 06:28:54.971770 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.971781 | orchestrator | 2026-02-17 06:28:54.971792 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 06:28:54.971803 | orchestrator | Tuesday 17 February 2026 06:28:19 +0000 (0:00:00.810) 0:41:35.135 ****** 2026-02-17 06:28:54.971814 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.971825 | orchestrator | 2026-02-17 06:28:54.971836 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 06:28:54.971872 | orchestrator | Tuesday 17 February 2026 06:28:20 +0000 (0:00:00.845) 0:41:35.981 ****** 2026-02-17 06:28:54.971925 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.971937 | orchestrator | 2026-02-17 06:28:54.971947 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 06:28:54.971958 | orchestrator | Tuesday 17 February 2026 06:28:21 +0000 (0:00:00.810) 0:41:36.792 ****** 2026-02-17 06:28:54.971969 | 
orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.971980 | orchestrator | 2026-02-17 06:28:54.971992 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 06:28:54.972004 | orchestrator | Tuesday 17 February 2026 06:28:22 +0000 (0:00:00.772) 0:41:37.564 ****** 2026-02-17 06:28:54.972017 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972029 | orchestrator | 2026-02-17 06:28:54.972041 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 06:28:54.972054 | orchestrator | Tuesday 17 February 2026 06:28:23 +0000 (0:00:00.817) 0:41:38.382 ****** 2026-02-17 06:28:54.972066 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972078 | orchestrator | 2026-02-17 06:28:54.972091 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 06:28:54.972103 | orchestrator | Tuesday 17 February 2026 06:28:23 +0000 (0:00:00.758) 0:41:39.141 ****** 2026-02-17 06:28:54.972115 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.972128 | orchestrator | 2026-02-17 06:28:54.972140 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 06:28:54.972153 | orchestrator | Tuesday 17 February 2026 06:28:24 +0000 (0:00:00.911) 0:41:40.052 ****** 2026-02-17 06:28:54.972165 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.972178 | orchestrator | 2026-02-17 06:28:54.972190 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-17 06:28:54.972202 | orchestrator | Tuesday 17 February 2026 06:28:25 +0000 (0:00:00.818) 0:41:40.871 ****** 2026-02-17 06:28:54.972215 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972227 | orchestrator | 2026-02-17 06:28:54.972240 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-17 
06:28:54.972253 | orchestrator | Tuesday 17 February 2026 06:28:26 +0000 (0:00:00.770) 0:41:41.641 ****** 2026-02-17 06:28:54.972265 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972277 | orchestrator | 2026-02-17 06:28:54.972290 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-17 06:28:54.972302 | orchestrator | Tuesday 17 February 2026 06:28:27 +0000 (0:00:00.780) 0:41:42.422 ****** 2026-02-17 06:28:54.972315 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972327 | orchestrator | 2026-02-17 06:28:54.972340 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-17 06:28:54.972352 | orchestrator | Tuesday 17 February 2026 06:28:27 +0000 (0:00:00.813) 0:41:43.236 ****** 2026-02-17 06:28:54.972363 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972374 | orchestrator | 2026-02-17 06:28:54.972385 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-17 06:28:54.972396 | orchestrator | Tuesday 17 February 2026 06:28:28 +0000 (0:00:00.864) 0:41:44.101 ****** 2026-02-17 06:28:54.972406 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972417 | orchestrator | 2026-02-17 06:28:54.972428 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-17 06:28:54.972439 | orchestrator | Tuesday 17 February 2026 06:28:29 +0000 (0:00:00.772) 0:41:44.873 ****** 2026-02-17 06:28:54.972450 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972461 | orchestrator | 2026-02-17 06:28:54.972472 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-17 06:28:54.972483 | orchestrator | Tuesday 17 February 2026 06:28:30 +0000 (0:00:00.900) 0:41:45.773 ****** 2026-02-17 06:28:54.972507 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972519 | 
orchestrator | 2026-02-17 06:28:54.972530 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-17 06:28:54.972550 | orchestrator | Tuesday 17 February 2026 06:28:31 +0000 (0:00:00.766) 0:41:46.540 ****** 2026-02-17 06:28:54.972561 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972572 | orchestrator | 2026-02-17 06:28:54.972583 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-17 06:28:54.972594 | orchestrator | Tuesday 17 February 2026 06:28:32 +0000 (0:00:00.774) 0:41:47.314 ****** 2026-02-17 06:28:54.972622 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972633 | orchestrator | 2026-02-17 06:28:54.972644 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-17 06:28:54.972666 | orchestrator | Tuesday 17 February 2026 06:28:32 +0000 (0:00:00.810) 0:41:48.125 ****** 2026-02-17 06:28:54.972677 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972688 | orchestrator | 2026-02-17 06:28:54.972699 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-17 06:28:54.972710 | orchestrator | Tuesday 17 February 2026 06:28:33 +0000 (0:00:00.796) 0:41:48.922 ****** 2026-02-17 06:28:54.972721 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972732 | orchestrator | 2026-02-17 06:28:54.972743 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-17 06:28:54.972754 | orchestrator | Tuesday 17 February 2026 06:28:34 +0000 (0:00:00.795) 0:41:49.717 ****** 2026-02-17 06:28:54.972765 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972776 | orchestrator | 2026-02-17 06:28:54.972787 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-17 06:28:54.972798 | orchestrator | Tuesday 17 
February 2026 06:28:35 +0000 (0:00:00.919) 0:41:50.637 ****** 2026-02-17 06:28:54.972809 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.972820 | orchestrator | 2026-02-17 06:28:54.972831 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-17 06:28:54.972842 | orchestrator | Tuesday 17 February 2026 06:28:36 +0000 (0:00:01.588) 0:41:52.226 ****** 2026-02-17 06:28:54.972853 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.972864 | orchestrator | 2026-02-17 06:28:54.972875 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-17 06:28:54.972903 | orchestrator | Tuesday 17 February 2026 06:28:38 +0000 (0:00:01.821) 0:41:54.048 ****** 2026-02-17 06:28:54.972914 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-17 06:28:54.972925 | orchestrator | 2026-02-17 06:28:54.972937 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-17 06:28:54.972948 | orchestrator | Tuesday 17 February 2026 06:28:39 +0000 (0:00:01.161) 0:41:55.209 ****** 2026-02-17 06:28:54.972958 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.972969 | orchestrator | 2026-02-17 06:28:54.972980 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-17 06:28:54.972991 | orchestrator | Tuesday 17 February 2026 06:28:41 +0000 (0:00:01.275) 0:41:56.485 ****** 2026-02-17 06:28:54.973002 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.973013 | orchestrator | 2026-02-17 06:28:54.973024 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-17 06:28:54.973035 | orchestrator | Tuesday 17 February 2026 06:28:42 +0000 (0:00:01.153) 0:41:57.639 ****** 2026-02-17 06:28:54.973046 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-17 06:28:54.973057 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-17 06:28:54.973068 | orchestrator | 2026-02-17 06:28:54.973079 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-17 06:28:54.973090 | orchestrator | Tuesday 17 February 2026 06:28:44 +0000 (0:00:01.855) 0:41:59.494 ****** 2026-02-17 06:28:54.973101 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.973112 | orchestrator | 2026-02-17 06:28:54.973122 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-17 06:28:54.973134 | orchestrator | Tuesday 17 February 2026 06:28:45 +0000 (0:00:01.460) 0:42:00.954 ****** 2026-02-17 06:28:54.973151 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.973162 | orchestrator | 2026-02-17 06:28:54.973173 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-17 06:28:54.973184 | orchestrator | Tuesday 17 February 2026 06:28:46 +0000 (0:00:01.175) 0:42:02.129 ****** 2026-02-17 06:28:54.973195 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.973206 | orchestrator | 2026-02-17 06:28:54.973217 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-17 06:28:54.973228 | orchestrator | Tuesday 17 February 2026 06:28:47 +0000 (0:00:00.977) 0:42:03.107 ****** 2026-02-17 06:28:54.973239 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.973249 | orchestrator | 2026-02-17 06:28:54.973261 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-17 06:28:54.973272 | orchestrator | Tuesday 17 February 2026 06:28:48 +0000 (0:00:00.791) 0:42:03.898 ****** 2026-02-17 06:28:54.973283 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-02-17 06:28:54.973293 | orchestrator | 2026-02-17 06:28:54.973304 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-17 06:28:54.973315 | orchestrator | Tuesday 17 February 2026 06:28:49 +0000 (0:00:01.150) 0:42:05.049 ****** 2026-02-17 06:28:54.973326 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:28:54.973337 | orchestrator | 2026-02-17 06:28:54.973348 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-17 06:28:54.973359 | orchestrator | Tuesday 17 February 2026 06:28:51 +0000 (0:00:01.731) 0:42:06.780 ****** 2026-02-17 06:28:54.973370 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-17 06:28:54.973381 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-17 06:28:54.973398 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-17 06:28:54.973409 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.973420 | orchestrator | 2026-02-17 06:28:54.973431 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-17 06:28:54.973442 | orchestrator | Tuesday 17 February 2026 06:28:52 +0000 (0:00:01.151) 0:42:07.932 ****** 2026-02-17 06:28:54.973453 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:28:54.973464 | orchestrator | 2026-02-17 06:28:54.973475 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-17 06:28:54.973486 | orchestrator | Tuesday 17 February 2026 06:28:53 +0000 (0:00:01.100) 0:42:09.033 ****** 2026-02-17 06:28:54.973503 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.165589 | orchestrator | 2026-02-17 06:29:38.165703 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-17 06:29:38.165719 | 
orchestrator | Tuesday 17 February 2026 06:28:54 +0000 (0:00:01.195) 0:42:10.228 ****** 2026-02-17 06:29:38.165732 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.165744 | orchestrator | 2026-02-17 06:29:38.165755 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-17 06:29:38.165767 | orchestrator | Tuesday 17 February 2026 06:28:56 +0000 (0:00:01.253) 0:42:11.482 ****** 2026-02-17 06:29:38.165778 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.165789 | orchestrator | 2026-02-17 06:29:38.165800 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-17 06:29:38.165811 | orchestrator | Tuesday 17 February 2026 06:28:57 +0000 (0:00:01.176) 0:42:12.659 ****** 2026-02-17 06:29:38.165855 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.165867 | orchestrator | 2026-02-17 06:29:38.165878 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:29:38.165889 | orchestrator | Tuesday 17 February 2026 06:28:58 +0000 (0:00:00.869) 0:42:13.528 ****** 2026-02-17 06:29:38.165954 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:29:38.165967 | orchestrator | 2026-02-17 06:29:38.165980 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:29:38.166079 | orchestrator | Tuesday 17 February 2026 06:29:00 +0000 (0:00:02.156) 0:42:15.685 ****** 2026-02-17 06:29:38.166094 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:29:38.166105 | orchestrator | 2026-02-17 06:29:38.166116 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-17 06:29:38.166129 | orchestrator | Tuesday 17 February 2026 06:29:01 +0000 (0:00:00.810) 0:42:16.495 ****** 2026-02-17 06:29:38.166141 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-02-17 06:29:38.166154 | orchestrator | 2026-02-17 06:29:38.166172 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-17 06:29:38.166191 | orchestrator | Tuesday 17 February 2026 06:29:02 +0000 (0:00:01.149) 0:42:17.645 ****** 2026-02-17 06:29:38.166203 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.166216 | orchestrator | 2026-02-17 06:29:38.166228 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-17 06:29:38.166240 | orchestrator | Tuesday 17 February 2026 06:29:03 +0000 (0:00:01.163) 0:42:18.809 ****** 2026-02-17 06:29:38.166252 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.166265 | orchestrator | 2026-02-17 06:29:38.166277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-17 06:29:38.166290 | orchestrator | Tuesday 17 February 2026 06:29:04 +0000 (0:00:01.143) 0:42:19.953 ****** 2026-02-17 06:29:38.166302 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.166314 | orchestrator | 2026-02-17 06:29:38.166327 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-17 06:29:38.166339 | orchestrator | Tuesday 17 February 2026 06:29:05 +0000 (0:00:01.186) 0:42:21.139 ****** 2026-02-17 06:29:38.166352 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.166364 | orchestrator | 2026-02-17 06:29:38.166377 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-17 06:29:38.166389 | orchestrator | Tuesday 17 February 2026 06:29:07 +0000 (0:00:01.164) 0:42:22.303 ****** 2026-02-17 06:29:38.166401 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.166413 | orchestrator | 2026-02-17 06:29:38.166425 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-17 06:29:38.166438 | orchestrator | 
Tuesday 17 February 2026 06:29:08 +0000 (0:00:01.149) 0:42:23.453 ****** 2026-02-17 06:29:38.166450 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.166462 | orchestrator | 2026-02-17 06:29:38.166474 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-17 06:29:38.166486 | orchestrator | Tuesday 17 February 2026 06:29:09 +0000 (0:00:01.155) 0:42:24.609 ****** 2026-02-17 06:29:38.166497 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.166508 | orchestrator | 2026-02-17 06:29:38.166519 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-17 06:29:38.166529 | orchestrator | Tuesday 17 February 2026 06:29:10 +0000 (0:00:01.220) 0:42:25.829 ****** 2026-02-17 06:29:38.166540 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.166551 | orchestrator | 2026-02-17 06:29:38.166562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-17 06:29:38.166573 | orchestrator | Tuesday 17 February 2026 06:29:11 +0000 (0:00:01.176) 0:42:27.006 ****** 2026-02-17 06:29:38.166584 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:29:38.166594 | orchestrator | 2026-02-17 06:29:38.166605 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-17 06:29:38.166616 | orchestrator | Tuesday 17 February 2026 06:29:12 +0000 (0:00:00.870) 0:42:27.876 ****** 2026-02-17 06:29:38.166627 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-17 06:29:38.166639 | orchestrator | 2026-02-17 06:29:38.166650 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-17 06:29:38.166661 | orchestrator | Tuesday 17 February 2026 06:29:13 +0000 (0:00:01.125) 0:42:29.002 ****** 2026-02-17 06:29:38.166681 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-02-17 06:29:38.166693 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-17 06:29:38.166719 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-17 06:29:38.166730 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-17 06:29:38.166741 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-17 06:29:38.166752 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-17 06:29:38.166763 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-17 06:29:38.166774 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-17 06:29:38.166785 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-17 06:29:38.166815 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-17 06:29:38.166827 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-17 06:29:38.166838 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-17 06:29:38.166849 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 06:29:38.166859 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 06:29:38.166870 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-17 06:29:38.166881 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-17 06:29:38.166892 | orchestrator | 2026-02-17 06:29:38.166958 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 06:29:38.166978 | orchestrator | Tuesday 17 February 2026 06:29:19 +0000 (0:00:06.210) 0:42:35.212 ****** 2026-02-17 06:29:38.166995 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-17 06:29:38.167013 | orchestrator | 2026-02-17 06:29:38.167024 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-17 06:29:38.167035 | orchestrator | Tuesday 17 February 2026 06:29:21 +0000 (0:00:01.128) 0:42:36.341 ****** 2026-02-17 06:29:38.167046 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 06:29:38.167058 | orchestrator | 2026-02-17 06:29:38.167069 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-17 06:29:38.167080 | orchestrator | Tuesday 17 February 2026 06:29:22 +0000 (0:00:01.525) 0:42:37.866 ****** 2026-02-17 06:29:38.167091 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 06:29:38.167101 | orchestrator | 2026-02-17 06:29:38.167112 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-17 06:29:38.167123 | orchestrator | Tuesday 17 February 2026 06:29:24 +0000 (0:00:01.658) 0:42:39.524 ****** 2026-02-17 06:29:38.167134 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167145 | orchestrator | 2026-02-17 06:29:38.167155 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-17 06:29:38.167166 | orchestrator | Tuesday 17 February 2026 06:29:25 +0000 (0:00:00.805) 0:42:40.330 ****** 2026-02-17 06:29:38.167177 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167188 | orchestrator | 2026-02-17 06:29:38.167199 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-17 06:29:38.167210 | orchestrator | Tuesday 17 February 2026 06:29:25 +0000 (0:00:00.778) 0:42:41.108 ****** 2026-02-17 06:29:38.167220 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167231 | orchestrator | 2026-02-17 06:29:38.167242 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-17 06:29:38.167253 | orchestrator | Tuesday 17 February 2026 06:29:26 +0000 (0:00:00.802) 0:42:41.910 ****** 2026-02-17 06:29:38.167264 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167275 | orchestrator | 2026-02-17 06:29:38.167286 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-17 06:29:38.167306 | orchestrator | Tuesday 17 February 2026 06:29:27 +0000 (0:00:00.810) 0:42:42.720 ****** 2026-02-17 06:29:38.167317 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167328 | orchestrator | 2026-02-17 06:29:38.167339 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-17 06:29:38.167350 | orchestrator | Tuesday 17 February 2026 06:29:28 +0000 (0:00:00.874) 0:42:43.595 ****** 2026-02-17 06:29:38.167361 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167372 | orchestrator | 2026-02-17 06:29:38.167383 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-17 06:29:38.167394 | orchestrator | Tuesday 17 February 2026 06:29:29 +0000 (0:00:00.771) 0:42:44.367 ****** 2026-02-17 06:29:38.167404 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167415 | orchestrator | 2026-02-17 06:29:38.167426 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-17 06:29:38.167437 | orchestrator | Tuesday 17 February 2026 06:29:29 +0000 (0:00:00.770) 0:42:45.138 ****** 2026-02-17 06:29:38.167447 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167458 | orchestrator | 2026-02-17 06:29:38.167469 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-17 06:29:38.167480 | orchestrator | Tuesday 17 
February 2026 06:29:30 +0000 (0:00:00.880) 0:42:46.018 ****** 2026-02-17 06:29:38.167491 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167507 | orchestrator | 2026-02-17 06:29:38.167522 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-17 06:29:38.167534 | orchestrator | Tuesday 17 February 2026 06:29:31 +0000 (0:00:00.809) 0:42:46.828 ****** 2026-02-17 06:29:38.167545 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:29:38.167555 | orchestrator | 2026-02-17 06:29:38.167566 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-17 06:29:38.167577 | orchestrator | Tuesday 17 February 2026 06:29:32 +0000 (0:00:00.794) 0:42:47.623 ****** 2026-02-17 06:29:38.167588 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:29:38.167599 | orchestrator | 2026-02-17 06:29:38.167617 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-17 06:29:38.167628 | orchestrator | Tuesday 17 February 2026 06:29:33 +0000 (0:00:00.862) 0:42:48.486 ****** 2026-02-17 06:29:38.167640 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-17 06:29:38.167651 | orchestrator | 2026-02-17 06:29:38.167662 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-17 06:29:38.167673 | orchestrator | Tuesday 17 February 2026 06:29:37 +0000 (0:00:04.084) 0:42:52.570 ****** 2026-02-17 06:29:38.167692 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 06:30:19.686103 | orchestrator | 2026-02-17 06:30:19.686199 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-17 06:30:19.686212 | orchestrator | Tuesday 17 February 2026 06:29:38 +0000 (0:00:00.855) 0:42:53.426 ****** 2026-02-17 06:30:19.686223 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-17 06:30:19.686236 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-17 06:30:19.686245 | orchestrator | 2026-02-17 06:30:19.686254 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-17 06:30:19.686262 | orchestrator | Tuesday 17 February 2026 06:29:45 +0000 (0:00:07.058) 0:43:00.484 ****** 2026-02-17 06:30:19.686290 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686299 | orchestrator | 2026-02-17 06:30:19.686307 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-17 06:30:19.686315 | orchestrator | Tuesday 17 February 2026 06:29:45 +0000 (0:00:00.785) 0:43:01.269 ****** 2026-02-17 06:30:19.686323 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686331 | orchestrator | 2026-02-17 06:30:19.686340 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:30:19.686349 | orchestrator | Tuesday 17 February 2026 06:29:46 +0000 (0:00:00.893) 0:43:02.163 ****** 2026-02-17 06:30:19.686357 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686365 | orchestrator | 2026-02-17 06:30:19.686373 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-17 06:30:19.686381 | orchestrator | Tuesday 17 February 2026 06:29:47 +0000 (0:00:00.840) 0:43:03.003 ****** 2026-02-17 06:30:19.686389 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686397 | orchestrator | 2026-02-17 06:30:19.686405 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:30:19.686413 | orchestrator | Tuesday 17 February 2026 06:29:48 +0000 (0:00:00.804) 0:43:03.808 ****** 2026-02-17 06:30:19.686421 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686428 | orchestrator | 2026-02-17 06:30:19.686436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:30:19.686444 | orchestrator | Tuesday 17 February 2026 06:29:49 +0000 (0:00:00.881) 0:43:04.690 ****** 2026-02-17 06:30:19.686453 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:30:19.686461 | orchestrator | 2026-02-17 06:30:19.686469 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:30:19.686478 | orchestrator | Tuesday 17 February 2026 06:29:50 +0000 (0:00:00.936) 0:43:05.627 ****** 2026-02-17 06:30:19.686485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:30:19.686494 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:30:19.686502 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:30:19.686510 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686518 | orchestrator | 2026-02-17 06:30:19.686526 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:30:19.686534 | orchestrator | Tuesday 17 February 2026 06:29:51 +0000 (0:00:01.071) 0:43:06.698 ****** 2026-02-17 06:30:19.686542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:30:19.686550 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:30:19.686558 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:30:19.686566 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686574 | orchestrator | 2026-02-17 06:30:19.686582 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:30:19.686590 | orchestrator | Tuesday 17 February 2026 06:29:52 +0000 (0:00:01.081) 0:43:07.780 ****** 2026-02-17 06:30:19.686598 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:30:19.686606 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:30:19.686614 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:30:19.686622 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686630 | orchestrator | 2026-02-17 06:30:19.686638 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:30:19.686646 | orchestrator | Tuesday 17 February 2026 06:29:53 +0000 (0:00:01.098) 0:43:08.879 ****** 2026-02-17 06:30:19.686654 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:30:19.686662 | orchestrator | 2026-02-17 06:30:19.686670 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:30:19.686689 | orchestrator | Tuesday 17 February 2026 06:29:54 +0000 (0:00:00.841) 0:43:09.720 ****** 2026-02-17 06:30:19.686703 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-17 06:30:19.686712 | orchestrator | 2026-02-17 06:30:19.686720 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-17 06:30:19.686728 | orchestrator | Tuesday 17 February 2026 06:29:55 +0000 (0:00:01.101) 0:43:10.822 ****** 2026-02-17 06:30:19.686735 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:30:19.686743 | orchestrator | 
2026-02-17 06:30:19.686751 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-17 06:30:19.686759 | orchestrator | Tuesday 17 February 2026 06:29:57 +0000 (0:00:01.488) 0:43:12.311 ****** 2026-02-17 06:30:19.686767 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:30:19.686775 | orchestrator | 2026-02-17 06:30:19.686809 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:30:19.686818 | orchestrator | Tuesday 17 February 2026 06:29:57 +0000 (0:00:00.786) 0:43:13.097 ****** 2026-02-17 06:30:19.686826 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:30:19.686843 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:30:19.686851 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:30:19.686859 | orchestrator | 2026-02-17 06:30:19.686867 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-17 06:30:19.686875 | orchestrator | Tuesday 17 February 2026 06:29:59 +0000 (0:00:01.760) 0:43:14.857 ****** 2026-02-17 06:30:19.686883 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-02-17 06:30:19.686891 | orchestrator | 2026-02-17 06:30:19.686916 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-17 06:30:19.686924 | orchestrator | Tuesday 17 February 2026 06:30:00 +0000 (0:00:01.140) 0:43:15.998 ****** 2026-02-17 06:30:19.686932 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686940 | orchestrator | 2026-02-17 06:30:19.686948 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-17 06:30:19.686956 | orchestrator | Tuesday 17 February 2026 06:30:01 +0000 (0:00:01.146) 
0:43:17.145 ****** 2026-02-17 06:30:19.686964 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.686972 | orchestrator | 2026-02-17 06:30:19.686980 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-17 06:30:19.686988 | orchestrator | Tuesday 17 February 2026 06:30:02 +0000 (0:00:01.124) 0:43:18.270 ****** 2026-02-17 06:30:19.686996 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:30:19.687005 | orchestrator | 2026-02-17 06:30:19.687012 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-17 06:30:19.687020 | orchestrator | Tuesday 17 February 2026 06:30:04 +0000 (0:00:01.463) 0:43:19.734 ****** 2026-02-17 06:30:19.687028 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:30:19.687037 | orchestrator | 2026-02-17 06:30:19.687045 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-17 06:30:19.687053 | orchestrator | Tuesday 17 February 2026 06:30:05 +0000 (0:00:01.182) 0:43:20.917 ****** 2026-02-17 06:30:19.687061 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-17 06:30:19.687069 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-17 06:30:19.687077 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-17 06:30:19.687085 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-17 06:30:19.687093 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-17 06:30:19.687101 | orchestrator | 2026-02-17 06:30:19.687109 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-17 06:30:19.687117 | orchestrator | Tuesday 17 February 2026 06:30:08 +0000 (0:00:02.498) 0:43:23.416 ****** 2026-02-17 
06:30:19.687125 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.687138 | orchestrator | 2026-02-17 06:30:19.687147 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-17 06:30:19.687154 | orchestrator | Tuesday 17 February 2026 06:30:08 +0000 (0:00:00.775) 0:43:24.191 ****** 2026-02-17 06:30:19.687162 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-17 06:30:19.687170 | orchestrator | 2026-02-17 06:30:19.687178 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-17 06:30:19.687186 | orchestrator | Tuesday 17 February 2026 06:30:10 +0000 (0:00:01.130) 0:43:25.322 ****** 2026-02-17 06:30:19.687194 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-17 06:30:19.687202 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-17 06:30:19.687210 | orchestrator | 2026-02-17 06:30:19.687218 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-17 06:30:19.687226 | orchestrator | Tuesday 17 February 2026 06:30:11 +0000 (0:00:01.888) 0:43:27.210 ****** 2026-02-17 06:30:19.687234 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:30:19.687242 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-17 06:30:19.687250 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 06:30:19.687258 | orchestrator | 2026-02-17 06:30:19.687266 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-17 06:30:19.687274 | orchestrator | Tuesday 17 February 2026 06:30:15 +0000 (0:00:03.501) 0:43:30.711 ****** 2026-02-17 06:30:19.687282 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-17 06:30:19.687290 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-17 
06:30:19.687298 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:30:19.687306 | orchestrator | 2026-02-17 06:30:19.687314 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-17 06:30:19.687327 | orchestrator | Tuesday 17 February 2026 06:30:17 +0000 (0:00:01.676) 0:43:32.388 ****** 2026-02-17 06:30:19.687335 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.687343 | orchestrator | 2026-02-17 06:30:19.687351 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-17 06:30:19.687359 | orchestrator | Tuesday 17 February 2026 06:30:18 +0000 (0:00:00.950) 0:43:33.339 ****** 2026-02-17 06:30:19.687367 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.687375 | orchestrator | 2026-02-17 06:30:19.687383 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-17 06:30:19.687391 | orchestrator | Tuesday 17 February 2026 06:30:18 +0000 (0:00:00.810) 0:43:34.149 ****** 2026-02-17 06:30:19.687399 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:30:19.687407 | orchestrator | 2026-02-17 06:30:19.687420 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-17 06:31:23.815292 | orchestrator | Tuesday 17 February 2026 06:30:19 +0000 (0:00:00.795) 0:43:34.944 ****** 2026-02-17 06:31:23.815409 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-17 06:31:23.815425 | orchestrator | 2026-02-17 06:31:23.815438 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-17 06:31:23.815449 | orchestrator | Tuesday 17 February 2026 06:30:20 +0000 (0:00:01.168) 0:43:36.113 ****** 2026-02-17 06:31:23.815461 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:31:23.815473 | orchestrator | 2026-02-17 06:31:23.815484 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-17 06:31:23.815496 | orchestrator | Tuesday 17 February 2026 06:30:22 +0000 (0:00:01.482) 0:43:37.595 ****** 2026-02-17 06:31:23.815507 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:31:23.815518 | orchestrator | 2026-02-17 06:31:23.815529 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-17 06:31:23.815540 | orchestrator | Tuesday 17 February 2026 06:30:25 +0000 (0:00:03.421) 0:43:41.017 ****** 2026-02-17 06:31:23.815551 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-17 06:31:23.815586 | orchestrator | 2026-02-17 06:31:23.815661 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-17 06:31:23.815684 | orchestrator | Tuesday 17 February 2026 06:30:26 +0000 (0:00:01.105) 0:43:42.123 ****** 2026-02-17 06:31:23.815704 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:31:23.815724 | orchestrator | 2026-02-17 06:31:23.815741 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-17 06:31:23.815759 | orchestrator | Tuesday 17 February 2026 06:30:28 +0000 (0:00:02.018) 0:43:44.141 ****** 2026-02-17 06:31:23.815777 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:31:23.815793 | orchestrator | 2026-02-17 06:31:23.815808 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-17 06:31:23.815824 | orchestrator | Tuesday 17 February 2026 06:30:30 +0000 (0:00:01.899) 0:43:46.041 ****** 2026-02-17 06:31:23.815844 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:31:23.815864 | orchestrator | 2026-02-17 06:31:23.815885 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-17 06:31:23.815906 | orchestrator | Tuesday 17 February 2026 06:30:33 +0000 (0:00:02.246) 0:43:48.288 ****** 2026-02-17 
06:31:23.815924 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:31:23.815938 | orchestrator | 2026-02-17 06:31:23.815950 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-17 06:31:23.815962 | orchestrator | Tuesday 17 February 2026 06:30:34 +0000 (0:00:01.160) 0:43:49.448 ****** 2026-02-17 06:31:23.815975 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:31:23.815987 | orchestrator | 2026-02-17 06:31:23.816000 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-17 06:31:23.816014 | orchestrator | Tuesday 17 February 2026 06:30:35 +0000 (0:00:01.159) 0:43:50.608 ****** 2026-02-17 06:31:23.816026 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-17 06:31:23.816039 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-17 06:31:23.816052 | orchestrator | 2026-02-17 06:31:23.816064 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-17 06:31:23.816076 | orchestrator | Tuesday 17 February 2026 06:30:37 +0000 (0:00:01.813) 0:43:52.421 ****** 2026-02-17 06:31:23.816089 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-17 06:31:23.816101 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-17 06:31:23.816113 | orchestrator | 2026-02-17 06:31:23.816125 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-17 06:31:23.816138 | orchestrator | Tuesday 17 February 2026 06:30:40 +0000 (0:00:02.903) 0:43:55.325 ****** 2026-02-17 06:31:23.816151 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-17 06:31:23.816163 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-17 06:31:23.816175 | orchestrator | 2026-02-17 06:31:23.816187 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-17 06:31:23.816200 | orchestrator | Tuesday 17 February 2026 06:30:44 +0000 (0:00:04.273) 
0:43:59.599 ****** 2026-02-17 06:31:23.816212 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:31:23.816230 | orchestrator | 2026-02-17 06:31:23.816249 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-17 06:31:23.816268 | orchestrator | Tuesday 17 February 2026 06:30:45 +0000 (0:00:00.921) 0:44:00.520 ****** 2026-02-17 06:31:23.816285 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:31:23.816304 | orchestrator | 2026-02-17 06:31:23.816323 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-17 06:31:23.816342 | orchestrator | Tuesday 17 February 2026 06:30:46 +0000 (0:00:00.918) 0:44:01.439 ****** 2026-02-17 06:31:23.816361 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:31:23.816381 | orchestrator | 2026-02-17 06:31:23.816398 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-17 06:31:23.816416 | orchestrator | Tuesday 17 February 2026 06:30:47 +0000 (0:00:00.909) 0:44:02.349 ****** 2026-02-17 06:31:23.816434 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:31:23.816452 | orchestrator | 2026-02-17 06:31:23.816488 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-17 06:31:23.816508 | orchestrator | Tuesday 17 February 2026 06:30:47 +0000 (0:00:00.809) 0:44:03.159 ****** 2026-02-17 06:31:23.816538 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:31:23.816550 | orchestrator | 2026-02-17 06:31:23.816561 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-17 06:31:23.816572 | orchestrator | Tuesday 17 February 2026 06:30:48 +0000 (0:00:00.808) 0:44:03.968 ****** 2026-02-17 06:31:23.816583 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-17 06:31:23.816595 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-17 06:31:23.816651 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-17 06:31:23.816683 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-17 06:31:23.816695 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:31:23.816706 | orchestrator | 2026-02-17 06:31:23.816717 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-17 06:31:23.816728 | orchestrator | 2026-02-17 06:31:23.816739 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:31:23.816750 | orchestrator | Tuesday 17 February 2026 06:31:02 +0000 (0:00:13.883) 0:44:17.852 ****** 2026-02-17 06:31:23.816761 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-17 06:31:23.816772 | orchestrator | 2026-02-17 06:31:23.816783 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:31:23.816794 | orchestrator | Tuesday 17 February 2026 06:31:03 +0000 (0:00:01.326) 0:44:19.178 ****** 2026-02-17 06:31:23.816805 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:23.816816 | orchestrator | 2026-02-17 06:31:23.816827 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:31:23.816838 | orchestrator | Tuesday 17 February 2026 06:31:05 +0000 (0:00:01.430) 0:44:20.609 ****** 2026-02-17 06:31:23.816849 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:23.816860 | orchestrator | 2026-02-17 06:31:23.816871 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:31:23.816886 | 
orchestrator | Tuesday 17 February 2026 06:31:06 +0000 (0:00:01.129) 0:44:21.738 ****** 2026-02-17 06:31:23.816905 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:23.816923 | orchestrator | 2026-02-17 06:31:23.816940 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:31:23.816959 | orchestrator | Tuesday 17 February 2026 06:31:07 +0000 (0:00:01.452) 0:44:23.191 ****** 2026-02-17 06:31:23.816974 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:23.816992 | orchestrator | 2026-02-17 06:31:23.817009 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:31:23.817026 | orchestrator | Tuesday 17 February 2026 06:31:09 +0000 (0:00:01.183) 0:44:24.375 ****** 2026-02-17 06:31:23.817045 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:23.817064 | orchestrator | 2026-02-17 06:31:23.817084 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:31:23.817103 | orchestrator | Tuesday 17 February 2026 06:31:10 +0000 (0:00:01.159) 0:44:25.534 ****** 2026-02-17 06:31:23.817115 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:23.817126 | orchestrator | 2026-02-17 06:31:23.817137 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:31:23.817151 | orchestrator | Tuesday 17 February 2026 06:31:11 +0000 (0:00:01.165) 0:44:26.699 ****** 2026-02-17 06:31:23.817169 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:23.817187 | orchestrator | 2026-02-17 06:31:23.817205 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:31:23.817224 | orchestrator | Tuesday 17 February 2026 06:31:12 +0000 (0:00:01.187) 0:44:27.887 ****** 2026-02-17 06:31:23.817256 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:23.817276 | orchestrator | 2026-02-17 06:31:23.817294 | 
orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:31:23.817311 | orchestrator | Tuesday 17 February 2026 06:31:13 +0000 (0:00:01.121) 0:44:29.008 ****** 2026-02-17 06:31:23.817322 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:31:23.817333 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:31:23.817344 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:31:23.817354 | orchestrator | 2026-02-17 06:31:23.817365 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:31:23.817376 | orchestrator | Tuesday 17 February 2026 06:31:15 +0000 (0:00:02.082) 0:44:31.091 ****** 2026-02-17 06:31:23.817387 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:23.817397 | orchestrator | 2026-02-17 06:31:23.817408 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:31:23.817419 | orchestrator | Tuesday 17 February 2026 06:31:17 +0000 (0:00:01.333) 0:44:32.424 ****** 2026-02-17 06:31:23.817430 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:31:23.817440 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:31:23.817451 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:31:23.817462 | orchestrator | 2026-02-17 06:31:23.817472 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:31:23.817483 | orchestrator | Tuesday 17 February 2026 06:31:20 +0000 (0:00:03.231) 0:44:35.656 ****** 2026-02-17 06:31:23.817494 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-17 06:31:23.817505 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-17 06:31:23.817516 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-17 06:31:23.817527 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:23.817538 | orchestrator | 2026-02-17 06:31:23.817556 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:31:23.817568 | orchestrator | Tuesday 17 February 2026 06:31:22 +0000 (0:00:01.812) 0:44:37.469 ****** 2026-02-17 06:31:23.817581 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:31:23.817632 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:31:43.995788 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:31:43.995894 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:43.995909 | orchestrator | 2026-02-17 06:31:43.995921 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:31:43.995932 | orchestrator | Tuesday 17 February 2026 06:31:23 +0000 (0:00:01.604) 0:44:39.073 ****** 2026-02-17 06:31:43.995944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:43.995956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:43.995987 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:43.995998 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:43.996008 | orchestrator | 2026-02-17 06:31:43.996019 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:31:43.996029 | orchestrator | Tuesday 17 February 2026 06:31:24 +0000 (0:00:01.169) 0:44:40.243 ****** 2026-02-17 06:31:43.996041 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:31:18.023835', 'end': '2026-02-17 06:31:18.074846', 'delta': '0:00:00.051011', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:31:43.996054 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:31:18.591657', 'end': '2026-02-17 06:31:18.638697', 'delta': '0:00:00.047040', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:31:43.996094 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:31:19.155390', 'end': '2026-02-17 06:31:19.204683', 'delta': '0:00:00.049293', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:31:43.996106 | orchestrator | 2026-02-17 06:31:43.996116 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-02-17 06:31:43.996126 | orchestrator | Tuesday 17 February 2026 06:31:26 +0000 (0:00:01.248) 0:44:41.491 ****** 2026-02-17 06:31:43.996136 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:43.996147 | orchestrator | 2026-02-17 06:31:43.996156 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:31:43.996166 | orchestrator | Tuesday 17 February 2026 06:31:27 +0000 (0:00:01.262) 0:44:42.754 ****** 2026-02-17 06:31:43.996187 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:43.996197 | orchestrator | 2026-02-17 06:31:43.996206 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:31:43.996216 | orchestrator | Tuesday 17 February 2026 06:31:28 +0000 (0:00:01.265) 0:44:44.020 ****** 2026-02-17 06:31:43.996226 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:43.996236 | orchestrator | 2026-02-17 06:31:43.996246 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:31:43.996256 | orchestrator | Tuesday 17 February 2026 06:31:29 +0000 (0:00:01.132) 0:44:45.153 ****** 2026-02-17 06:31:43.996266 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:31:43.996275 | orchestrator | 2026-02-17 06:31:43.996285 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:31:43.996295 | orchestrator | Tuesday 17 February 2026 06:31:31 +0000 (0:00:02.030) 0:44:47.183 ****** 2026-02-17 06:31:43.996305 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:43.996314 | orchestrator | 2026-02-17 06:31:43.996324 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:31:43.996334 | orchestrator | Tuesday 17 February 2026 06:31:33 +0000 (0:00:01.181) 0:44:48.365 ****** 2026-02-17 06:31:43.996344 | orchestrator | skipping: [testbed-node-5] 2026-02-17 
06:31:43.996353 | orchestrator | 2026-02-17 06:31:43.996363 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:31:43.996373 | orchestrator | Tuesday 17 February 2026 06:31:34 +0000 (0:00:01.122) 0:44:49.487 ****** 2026-02-17 06:31:43.996383 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:43.996393 | orchestrator | 2026-02-17 06:31:43.996402 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:31:43.996412 | orchestrator | Tuesday 17 February 2026 06:31:35 +0000 (0:00:01.266) 0:44:50.754 ****** 2026-02-17 06:31:43.996422 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:43.996432 | orchestrator | 2026-02-17 06:31:43.996441 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:31:43.996451 | orchestrator | Tuesday 17 February 2026 06:31:36 +0000 (0:00:01.179) 0:44:51.933 ****** 2026-02-17 06:31:43.996461 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:43.996471 | orchestrator | 2026-02-17 06:31:43.996480 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:31:43.996490 | orchestrator | Tuesday 17 February 2026 06:31:37 +0000 (0:00:01.142) 0:44:53.076 ****** 2026-02-17 06:31:43.996500 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:43.996509 | orchestrator | 2026-02-17 06:31:43.996556 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:31:43.996565 | orchestrator | Tuesday 17 February 2026 06:31:39 +0000 (0:00:01.350) 0:44:54.426 ****** 2026-02-17 06:31:43.996575 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:43.996585 | orchestrator | 2026-02-17 06:31:43.996595 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:31:43.996605 | orchestrator | Tuesday 17 
February 2026 06:31:40 +0000 (0:00:01.107) 0:44:55.533 ****** 2026-02-17 06:31:43.996614 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:43.996624 | orchestrator | 2026-02-17 06:31:43.996634 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:31:43.996643 | orchestrator | Tuesday 17 February 2026 06:31:41 +0000 (0:00:01.175) 0:44:56.709 ****** 2026-02-17 06:31:43.996653 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:43.996663 | orchestrator | 2026-02-17 06:31:43.996673 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:31:43.996683 | orchestrator | Tuesday 17 February 2026 06:31:42 +0000 (0:00:01.138) 0:44:57.847 ****** 2026-02-17 06:31:43.996692 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:43.996702 | orchestrator | 2026-02-17 06:31:43.996712 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:31:43.996722 | orchestrator | Tuesday 17 February 2026 06:31:43 +0000 (0:00:01.184) 0:44:59.032 ****** 2026-02-17 06:31:43.996738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:31:43.996761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'uuids': ['4833064e-8ca1-479d-a0c0-581ea0d1065c'], 'labels': [], 'masters': ['dm-3']}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7']}})  2026-02-17 06:31:44.004611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b093f3ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:31:44.004670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee']}})  2026-02-17 06:31:44.004683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:31:44.004696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:31:44.004707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:31:44.004732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:31:44.004766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV', 'dm-uuid-CRYPT-LUKS2-f004f31e7c734e098d3470dc55158438-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:31:44.004789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:31:44.004801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'uuids': ['f004f31e-7c73-4e09-8d34-70dc55158438'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV']}})  2026-02-17 06:31:44.004812 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841']}})  2026-02-17 06:31:44.004822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:31:44.004849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37d8f58a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:31:45.394852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:31:45.394960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:31:45.394978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7', 'dm-uuid-CRYPT-LUKS2-4833064e8ca1479da0c0581ea0d1065c-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:31:45.394994 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:45.395007 | orchestrator | 2026-02-17 06:31:45.395019 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:31:45.395032 | orchestrator | Tuesday 17 February 2026 06:31:45 +0000 (0:00:01.379) 0:45:00.412 ****** 2026-02-17 06:31:45.395044 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:45.395080 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'uuids': ['4833064e-8ca1-479d-a0c0-581ea0d1065c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:45.395108 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b093f3ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:45.395141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:45.395157 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:45.395169 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:45.395189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:45.395207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:45.395226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV', 'dm-uuid-CRYPT-LUKS2-f004f31e7c734e098d3470dc55158438-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'uuids': ['f004f31e-7c73-4e09-8d34-70dc55158438'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718474 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718594 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718631 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37d8f58a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718647 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718670 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7', 'dm-uuid-CRYPT-LUKS2-4833064e8ca1479da0c0581ea0d1065c-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:31:50.718699 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:31:50.718738 | orchestrator | 2026-02-17 06:31:50.718751 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 06:31:50.718763 | orchestrator | Tuesday 17 February 2026 06:31:46 +0000 (0:00:01.427) 0:45:01.839 ****** 2026-02-17 06:31:50.718774 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:50.718785 | orchestrator | 2026-02-17 06:31:50.718795 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 06:31:50.718805 | orchestrator | Tuesday 17 February 2026 06:31:48 +0000 (0:00:01.575) 0:45:03.414 ****** 2026-02-17 06:31:50.718816 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:50.718826 | orchestrator | 2026-02-17 06:31:50.718836 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:31:50.718847 | orchestrator | Tuesday 17 February 2026 06:31:49 +0000 (0:00:01.115) 0:45:04.530 ****** 2026-02-17 06:31:50.718856 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:31:50.718867 | orchestrator | 2026-02-17 06:31:50.718877 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:31:50.718895 | orchestrator | Tuesday 17 February 2026 06:31:50 +0000 (0:00:01.448) 0:45:05.978 ****** 2026-02-17 06:32:34.359906 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:32:34.360053 | orchestrator | 2026-02-17 06:32:34.360085 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:32:34.360102 | orchestrator | Tuesday 17 February 2026 06:31:51 +0000 (0:00:01.136) 0:45:07.115 ****** 2026-02-17 06:32:34.360114 | orchestrator | skipping: [testbed-node-5] 2026-02-17 
06:32:34.360125 | orchestrator |
2026-02-17 06:32:34.360137 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:32:34.360148 | orchestrator | Tuesday 17 February 2026 06:31:53 +0000 (0:00:01.272) 0:45:08.387 ******
2026-02-17 06:32:34.360159 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:32:34.360171 | orchestrator |
2026-02-17 06:32:34.360182 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:32:34.360193 | orchestrator | Tuesday 17 February 2026 06:31:54 +0000 (0:00:01.155) 0:45:09.543 ******
2026-02-17 06:32:34.360205 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-17 06:32:34.360242 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-17 06:32:34.360253 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-17 06:32:34.360264 | orchestrator |
2026-02-17 06:32:34.360275 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:32:34.360287 | orchestrator | Tuesday 17 February 2026 06:31:56 +0000 (0:00:02.166) 0:45:11.710 ******
2026-02-17 06:32:34.360298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-17 06:32:34.360340 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-17 06:32:34.360354 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-17 06:32:34.360365 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:32:34.360376 | orchestrator |
2026-02-17 06:32:34.360388 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 06:32:34.360399 | orchestrator | Tuesday 17 February 2026 06:31:57 +0000 (0:00:01.172) 0:45:12.882 ******
2026-02-17 06:32:34.360410 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-02-17 06:32:34.360422 | orchestrator |
2026-02-17 06:32:34.360436 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:32:34.360450 | orchestrator | Tuesday 17 February 2026 06:31:58 +0000 (0:00:01.184) 0:45:14.067 ******
2026-02-17 06:32:34.360463 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:32:34.360475 | orchestrator |
2026-02-17 06:32:34.360488 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:32:34.360500 | orchestrator | Tuesday 17 February 2026 06:31:59 +0000 (0:00:01.152) 0:45:15.220 ******
2026-02-17 06:32:34.360513 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:32:34.360525 | orchestrator |
2026-02-17 06:32:34.360537 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:32:34.360550 | orchestrator | Tuesday 17 February 2026 06:32:01 +0000 (0:00:01.138) 0:45:16.359 ******
2026-02-17 06:32:34.360562 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:32:34.360574 | orchestrator |
2026-02-17 06:32:34.360587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:32:34.360599 | orchestrator | Tuesday 17 February 2026 06:32:02 +0000 (0:00:01.169) 0:45:17.528 ******
2026-02-17 06:32:34.360612 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:32:34.360624 | orchestrator |
2026-02-17 06:32:34.360637 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:32:34.360650 | orchestrator | Tuesday 17 February 2026 06:32:03 +0000 (0:00:01.224) 0:45:18.752 ******
2026-02-17 06:32:34.360662 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 06:32:34.360674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 06:32:34.360687 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:32:34.360700 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:32:34.360712 | orchestrator |
2026-02-17 06:32:34.360725 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:32:34.360737 | orchestrator | Tuesday 17 February 2026 06:32:04 +0000 (0:00:01.402) 0:45:20.155 ******
2026-02-17 06:32:34.360749 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 06:32:34.360761 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 06:32:34.360773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:32:34.360785 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:32:34.360798 | orchestrator |
2026-02-17 06:32:34.360826 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:32:34.360838 | orchestrator | Tuesday 17 February 2026 06:32:06 +0000 (0:00:01.376) 0:45:21.534 ******
2026-02-17 06:32:34.360849 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 06:32:34.360859 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 06:32:34.360878 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:32:34.360889 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:32:34.360900 | orchestrator |
2026-02-17 06:32:34.360911 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:32:34.360928 | orchestrator | Tuesday 17 February 2026 06:32:07 +0000 (0:00:01.380) 0:45:22.914 ******
2026-02-17 06:32:34.360946 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:32:34.360963 | orchestrator |
2026-02-17 06:32:34.360980 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:32:34.360998 | orchestrator | Tuesday 17 February 2026 06:32:08 +0000
(0:00:01.216) 0:45:24.131 ******
2026-02-17 06:32:34.361012 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-17 06:32:34.361022 | orchestrator |
2026-02-17 06:32:34.361033 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 06:32:34.361044 | orchestrator | Tuesday 17 February 2026 06:32:10 +0000 (0:00:01.741) 0:45:25.873 ******
2026-02-17 06:32:34.361075 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:32:34.361087 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:32:34.361098 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:32:34.361109 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:32:34.361120 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:32:34.361130 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:32:34.361141 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:32:34.361152 | orchestrator |
2026-02-17 06:32:34.361163 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 06:32:34.361174 | orchestrator | Tuesday 17 February 2026 06:32:12 +0000 (0:00:02.245) 0:45:28.119 ******
2026-02-17 06:32:34.361185 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:32:34.361196 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:32:34.361207 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:32:34.361217 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:32:34.361228 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:32:34.361239 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:32:34.361250 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:32:34.361261 | orchestrator |
2026-02-17 06:32:34.361272 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-17 06:32:34.361282 | orchestrator | Tuesday 17 February 2026 06:32:15 +0000 (0:00:02.713) 0:45:30.832 ******
2026-02-17 06:32:34.361293 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:32:34.361304 | orchestrator |
2026-02-17 06:32:34.361346 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-17 06:32:34.361366 | orchestrator | Tuesday 17 February 2026 06:32:16 +0000 (0:00:01.105) 0:45:31.938 ******
2026-02-17 06:32:34.361384 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:32:34.361403 | orchestrator |
2026-02-17 06:32:34.361414 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-17 06:32:34.361425 | orchestrator | Tuesday 17 February 2026 06:32:17 +0000 (0:00:00.803) 0:45:32.741 ******
2026-02-17 06:32:34.361436 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:32:34.361453 | orchestrator |
2026-02-17 06:32:34.361470 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-17 06:32:34.361481 | orchestrator | Tuesday 17 February 2026 06:32:18 +0000 (0:00:01.014) 0:45:33.756 ******
2026-02-17 06:32:34.361501 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-17 06:32:34.361512 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-17 06:32:34.361523 | orchestrator |
2026-02-17 06:32:34.361534 | orchestrator | TASK [ceph-handler : Include
check_running_cluster.yml] ************************ 2026-02-17 06:32:34.361545 | orchestrator | Tuesday 17 February 2026 06:32:22 +0000 (0:00:03.821) 0:45:37.578 ****** 2026-02-17 06:32:34.361556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-17 06:32:34.361567 | orchestrator | 2026-02-17 06:32:34.361578 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 06:32:34.361589 | orchestrator | Tuesday 17 February 2026 06:32:23 +0000 (0:00:01.094) 0:45:38.673 ****** 2026-02-17 06:32:34.361600 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-17 06:32:34.361611 | orchestrator | 2026-02-17 06:32:34.361622 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 06:32:34.361633 | orchestrator | Tuesday 17 February 2026 06:32:24 +0000 (0:00:01.133) 0:45:39.807 ****** 2026-02-17 06:32:34.361644 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:32:34.361655 | orchestrator | 2026-02-17 06:32:34.361666 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 06:32:34.361677 | orchestrator | Tuesday 17 February 2026 06:32:25 +0000 (0:00:01.199) 0:45:41.007 ****** 2026-02-17 06:32:34.361688 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:32:34.361699 | orchestrator | 2026-02-17 06:32:34.361717 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 06:32:34.361728 | orchestrator | Tuesday 17 February 2026 06:32:27 +0000 (0:00:01.545) 0:45:42.552 ****** 2026-02-17 06:32:34.361739 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:32:34.361750 | orchestrator | 2026-02-17 06:32:34.361761 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 06:32:34.361772 | orchestrator | 
Tuesday 17 February 2026 06:32:28 +0000 (0:00:01.550) 0:45:44.103 ****** 2026-02-17 06:32:34.361783 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:32:34.361794 | orchestrator | 2026-02-17 06:32:34.361805 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 06:32:34.361816 | orchestrator | Tuesday 17 February 2026 06:32:30 +0000 (0:00:02.048) 0:45:46.152 ****** 2026-02-17 06:32:34.361827 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:32:34.361838 | orchestrator | 2026-02-17 06:32:34.361849 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 06:32:34.361860 | orchestrator | Tuesday 17 February 2026 06:32:32 +0000 (0:00:01.146) 0:45:47.298 ****** 2026-02-17 06:32:34.361871 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:32:34.361882 | orchestrator | 2026-02-17 06:32:34.361893 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 06:32:34.361904 | orchestrator | Tuesday 17 February 2026 06:32:33 +0000 (0:00:01.175) 0:45:48.474 ****** 2026-02-17 06:32:34.361915 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:32:34.361926 | orchestrator | 2026-02-17 06:32:34.361945 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 06:33:15.937444 | orchestrator | Tuesday 17 February 2026 06:32:34 +0000 (0:00:01.141) 0:45:49.616 ****** 2026-02-17 06:33:15.937557 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.937575 | orchestrator | 2026-02-17 06:33:15.937589 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 06:33:15.937600 | orchestrator | Tuesday 17 February 2026 06:32:35 +0000 (0:00:01.559) 0:45:51.176 ****** 2026-02-17 06:33:15.937611 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.937622 | orchestrator | 2026-02-17 06:33:15.937633 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 06:33:15.937645 | orchestrator | Tuesday 17 February 2026 06:32:37 +0000 (0:00:01.646) 0:45:52.822 ****** 2026-02-17 06:33:15.937656 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.937667 | orchestrator | 2026-02-17 06:33:15.937679 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 06:33:15.937714 | orchestrator | Tuesday 17 February 2026 06:32:38 +0000 (0:00:00.829) 0:45:53.651 ****** 2026-02-17 06:33:15.937726 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.937737 | orchestrator | 2026-02-17 06:33:15.937748 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 06:33:15.937759 | orchestrator | Tuesday 17 February 2026 06:32:39 +0000 (0:00:00.821) 0:45:54.473 ****** 2026-02-17 06:33:15.937770 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.937781 | orchestrator | 2026-02-17 06:33:15.937792 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 06:33:15.937803 | orchestrator | Tuesday 17 February 2026 06:32:40 +0000 (0:00:00.811) 0:45:55.284 ****** 2026-02-17 06:33:15.937814 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.937825 | orchestrator | 2026-02-17 06:33:15.937836 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 06:33:15.937847 | orchestrator | Tuesday 17 February 2026 06:32:40 +0000 (0:00:00.820) 0:45:56.105 ****** 2026-02-17 06:33:15.937858 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.937869 | orchestrator | 2026-02-17 06:33:15.937880 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 06:33:15.937891 | orchestrator | Tuesday 17 February 2026 06:32:41 +0000 (0:00:00.787) 0:45:56.893 ****** 2026-02-17 06:33:15.937901 | 
orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.937912 | orchestrator | 2026-02-17 06:33:15.937923 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 06:33:15.937934 | orchestrator | Tuesday 17 February 2026 06:32:42 +0000 (0:00:00.773) 0:45:57.666 ****** 2026-02-17 06:33:15.937945 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.937960 | orchestrator | 2026-02-17 06:33:15.937981 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 06:33:15.938003 | orchestrator | Tuesday 17 February 2026 06:32:43 +0000 (0:00:00.837) 0:45:58.503 ****** 2026-02-17 06:33:15.938109 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938137 | orchestrator | 2026-02-17 06:33:15.938191 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 06:33:15.938207 | orchestrator | Tuesday 17 February 2026 06:32:44 +0000 (0:00:00.830) 0:45:59.333 ****** 2026-02-17 06:33:15.938219 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.938230 | orchestrator | 2026-02-17 06:33:15.938241 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 06:33:15.938252 | orchestrator | Tuesday 17 February 2026 06:32:44 +0000 (0:00:00.806) 0:46:00.140 ****** 2026-02-17 06:33:15.938263 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.938273 | orchestrator | 2026-02-17 06:33:15.938285 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-17 06:33:15.938296 | orchestrator | Tuesday 17 February 2026 06:32:45 +0000 (0:00:00.813) 0:46:00.953 ****** 2026-02-17 06:33:15.938306 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938317 | orchestrator | 2026-02-17 06:33:15.938328 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-17 
06:33:15.938339 | orchestrator | Tuesday 17 February 2026 06:32:46 +0000 (0:00:00.862) 0:46:01.816 ****** 2026-02-17 06:33:15.938350 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938360 | orchestrator | 2026-02-17 06:33:15.938371 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-17 06:33:15.938382 | orchestrator | Tuesday 17 February 2026 06:32:47 +0000 (0:00:00.799) 0:46:02.616 ****** 2026-02-17 06:33:15.938393 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938403 | orchestrator | 2026-02-17 06:33:15.938415 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-17 06:33:15.938441 | orchestrator | Tuesday 17 February 2026 06:32:48 +0000 (0:00:00.851) 0:46:03.467 ****** 2026-02-17 06:33:15.938453 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938463 | orchestrator | 2026-02-17 06:33:15.938475 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-17 06:33:15.938496 | orchestrator | Tuesday 17 February 2026 06:32:48 +0000 (0:00:00.774) 0:46:04.241 ****** 2026-02-17 06:33:15.938508 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938518 | orchestrator | 2026-02-17 06:33:15.938529 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-17 06:33:15.938541 | orchestrator | Tuesday 17 February 2026 06:32:49 +0000 (0:00:00.772) 0:46:05.014 ****** 2026-02-17 06:33:15.938551 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938563 | orchestrator | 2026-02-17 06:33:15.938574 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-17 06:33:15.938585 | orchestrator | Tuesday 17 February 2026 06:32:50 +0000 (0:00:00.800) 0:46:05.815 ****** 2026-02-17 06:33:15.938596 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938607 | 
orchestrator | 2026-02-17 06:33:15.938618 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-17 06:33:15.938630 | orchestrator | Tuesday 17 February 2026 06:32:51 +0000 (0:00:00.795) 0:46:06.610 ****** 2026-02-17 06:33:15.938640 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938651 | orchestrator | 2026-02-17 06:33:15.938662 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-17 06:33:15.938673 | orchestrator | Tuesday 17 February 2026 06:32:52 +0000 (0:00:00.803) 0:46:07.414 ****** 2026-02-17 06:33:15.938703 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938715 | orchestrator | 2026-02-17 06:33:15.938726 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-17 06:33:15.938737 | orchestrator | Tuesday 17 February 2026 06:32:52 +0000 (0:00:00.811) 0:46:08.225 ****** 2026-02-17 06:33:15.938747 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938758 | orchestrator | 2026-02-17 06:33:15.938769 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-17 06:33:15.938780 | orchestrator | Tuesday 17 February 2026 06:32:53 +0000 (0:00:00.887) 0:46:09.113 ****** 2026-02-17 06:33:15.938791 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938801 | orchestrator | 2026-02-17 06:33:15.938812 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-17 06:33:15.938823 | orchestrator | Tuesday 17 February 2026 06:32:54 +0000 (0:00:00.782) 0:46:09.896 ****** 2026-02-17 06:33:15.938834 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.938845 | orchestrator | 2026-02-17 06:33:15.938856 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-17 06:33:15.938867 | orchestrator | Tuesday 17 
February 2026 06:32:55 +0000 (0:00:00.895) 0:46:10.791 ****** 2026-02-17 06:33:15.938877 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.938888 | orchestrator | 2026-02-17 06:33:15.938899 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-17 06:33:15.938910 | orchestrator | Tuesday 17 February 2026 06:32:57 +0000 (0:00:01.575) 0:46:12.366 ****** 2026-02-17 06:33:15.938921 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.938932 | orchestrator | 2026-02-17 06:33:15.938943 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-17 06:33:15.938954 | orchestrator | Tuesday 17 February 2026 06:33:00 +0000 (0:00:02.940) 0:46:15.307 ****** 2026-02-17 06:33:15.938964 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-17 06:33:15.938976 | orchestrator | 2026-02-17 06:33:15.938987 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-17 06:33:15.938998 | orchestrator | Tuesday 17 February 2026 06:33:01 +0000 (0:00:01.120) 0:46:16.428 ****** 2026-02-17 06:33:15.939008 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.939019 | orchestrator | 2026-02-17 06:33:15.939030 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-17 06:33:15.939041 | orchestrator | Tuesday 17 February 2026 06:33:02 +0000 (0:00:01.162) 0:46:17.590 ****** 2026-02-17 06:33:15.939052 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.939069 | orchestrator | 2026-02-17 06:33:15.939080 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-17 06:33:15.939091 | orchestrator | Tuesday 17 February 2026 06:33:03 +0000 (0:00:01.153) 0:46:18.744 ****** 2026-02-17 06:33:15.939102 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-17 06:33:15.939113 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-17 06:33:15.939123 | orchestrator | 2026-02-17 06:33:15.939134 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-17 06:33:15.939145 | orchestrator | Tuesday 17 February 2026 06:33:05 +0000 (0:00:01.824) 0:46:20.569 ****** 2026-02-17 06:33:15.939171 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.939183 | orchestrator | 2026-02-17 06:33:15.939194 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-17 06:33:15.939205 | orchestrator | Tuesday 17 February 2026 06:33:06 +0000 (0:00:01.450) 0:46:22.019 ****** 2026-02-17 06:33:15.939216 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.939227 | orchestrator | 2026-02-17 06:33:15.939238 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-17 06:33:15.939249 | orchestrator | Tuesday 17 February 2026 06:33:07 +0000 (0:00:01.241) 0:46:23.261 ****** 2026-02-17 06:33:15.939260 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.939271 | orchestrator | 2026-02-17 06:33:15.939282 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-17 06:33:15.939292 | orchestrator | Tuesday 17 February 2026 06:33:08 +0000 (0:00:00.831) 0:46:24.092 ****** 2026-02-17 06:33:15.939303 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.939314 | orchestrator | 2026-02-17 06:33:15.939325 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-17 06:33:15.939336 | orchestrator | Tuesday 17 February 2026 06:33:09 +0000 (0:00:00.753) 0:46:24.846 ****** 2026-02-17 06:33:15.939353 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-02-17 06:33:15.939364 | orchestrator | 2026-02-17 06:33:15.939375 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-17 06:33:15.939387 | orchestrator | Tuesday 17 February 2026 06:33:10 +0000 (0:00:01.129) 0:46:25.976 ****** 2026-02-17 06:33:15.939397 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:15.939408 | orchestrator | 2026-02-17 06:33:15.939420 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-17 06:33:15.939430 | orchestrator | Tuesday 17 February 2026 06:33:12 +0000 (0:00:01.733) 0:46:27.710 ****** 2026-02-17 06:33:15.939441 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-17 06:33:15.939452 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-17 06:33:15.939463 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-17 06:33:15.939474 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.939485 | orchestrator | 2026-02-17 06:33:15.939496 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-17 06:33:15.939507 | orchestrator | Tuesday 17 February 2026 06:33:13 +0000 (0:00:01.167) 0:46:28.878 ****** 2026-02-17 06:33:15.939518 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:15.939529 | orchestrator | 2026-02-17 06:33:15.939540 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-17 06:33:15.939551 | orchestrator | Tuesday 17 February 2026 06:33:14 +0000 (0:00:01.153) 0:46:30.031 ****** 2026-02-17 06:33:15.939570 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:59.062091 | orchestrator | 2026-02-17 06:33:59.062207 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-17 06:33:59.062226 | 
orchestrator | Tuesday 17 February 2026 06:33:15 +0000 (0:00:01.164) 0:46:31.195 ****** 2026-02-17 06:33:59.062240 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:59.062282 | orchestrator | 2026-02-17 06:33:59.062297 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-17 06:33:59.062306 | orchestrator | Tuesday 17 February 2026 06:33:17 +0000 (0:00:01.158) 0:46:32.353 ****** 2026-02-17 06:33:59.062314 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:59.062321 | orchestrator | 2026-02-17 06:33:59.062328 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-17 06:33:59.062336 | orchestrator | Tuesday 17 February 2026 06:33:18 +0000 (0:00:01.171) 0:46:33.525 ****** 2026-02-17 06:33:59.062343 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:33:59.062350 | orchestrator | 2026-02-17 06:33:59.062358 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:33:59.062365 | orchestrator | Tuesday 17 February 2026 06:33:19 +0000 (0:00:00.809) 0:46:34.335 ****** 2026-02-17 06:33:59.062373 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:59.062382 | orchestrator | 2026-02-17 06:33:59.062389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:33:59.062397 | orchestrator | Tuesday 17 February 2026 06:33:21 +0000 (0:00:02.129) 0:46:36.465 ****** 2026-02-17 06:33:59.062405 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:33:59.062412 | orchestrator | 2026-02-17 06:33:59.062419 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-17 06:33:59.062427 | orchestrator | Tuesday 17 February 2026 06:33:22 +0000 (0:00:00.813) 0:46:37.278 ****** 2026-02-17 06:33:59.062434 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-02-17 06:33:59.062441 | orchestrator |
2026-02-17 06:33:59.062448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 06:33:59.062456 | orchestrator | Tuesday 17 February 2026 06:33:23 +0000 (0:00:01.264) 0:46:38.543 ******
2026-02-17 06:33:59.062463 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.062470 | orchestrator |
2026-02-17 06:33:59.062477 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 06:33:59.062485 | orchestrator | Tuesday 17 February 2026 06:33:24 +0000 (0:00:01.177) 0:46:39.721 ******
2026-02-17 06:33:59.062492 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.062499 | orchestrator |
2026-02-17 06:33:59.062506 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 06:33:59.062514 | orchestrator | Tuesday 17 February 2026 06:33:25 +0000 (0:00:01.218) 0:46:40.940 ******
2026-02-17 06:33:59.062521 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.062528 | orchestrator |
2026-02-17 06:33:59.062536 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 06:33:59.062543 | orchestrator | Tuesday 17 February 2026 06:33:26 +0000 (0:00:01.166) 0:46:42.106 ******
2026-02-17 06:33:59.062550 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.062557 | orchestrator |
2026-02-17 06:33:59.062565 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 06:33:59.062572 | orchestrator | Tuesday 17 February 2026 06:33:27 +0000 (0:00:01.162) 0:46:43.269 ******
2026-02-17 06:33:59.062581 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.062589 | orchestrator |
2026-02-17 06:33:59.062597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 06:33:59.062606 | orchestrator | Tuesday 17 February 2026 06:33:29 +0000 (0:00:01.177) 0:46:44.446 ******
2026-02-17 06:33:59.062614 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.062623 | orchestrator |
2026-02-17 06:33:59.062631 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 06:33:59.062640 | orchestrator | Tuesday 17 February 2026 06:33:30 +0000 (0:00:01.135) 0:46:45.582 ******
2026-02-17 06:33:59.062649 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.062657 | orchestrator |
2026-02-17 06:33:59.062666 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 06:33:59.062674 | orchestrator | Tuesday 17 February 2026 06:33:31 +0000 (0:00:01.133) 0:46:46.716 ******
2026-02-17 06:33:59.062689 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.062697 | orchestrator |
2026-02-17 06:33:59.062706 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 06:33:59.062728 | orchestrator | Tuesday 17 February 2026 06:33:32 +0000 (0:00:01.162) 0:46:47.879 ******
2026-02-17 06:33:59.062737 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:33:59.062745 | orchestrator |
2026-02-17 06:33:59.062754 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 06:33:59.062762 | orchestrator | Tuesday 17 February 2026 06:33:33 +0000 (0:00:00.805) 0:46:48.684 ******
2026-02-17 06:33:59.062771 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-17 06:33:59.062780 | orchestrator |
2026-02-17 06:33:59.062789 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 06:33:59.062797 | orchestrator | Tuesday 17 February 2026 06:33:34 +0000 (0:00:01.126) 0:46:49.811 ******
2026-02-17 06:33:59.062806 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-17 06:33:59.062815 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-17 06:33:59.062824 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-17 06:33:59.062832 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-17 06:33:59.062841 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-17 06:33:59.062850 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-17 06:33:59.062858 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-17 06:33:59.062866 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-17 06:33:59.062876 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 06:33:59.062900 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 06:33:59.062909 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 06:33:59.062917 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 06:33:59.062926 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 06:33:59.062935 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 06:33:59.062944 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-17 06:33:59.062953 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-17 06:33:59.062960 | orchestrator |
2026-02-17 06:33:59.062968 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 06:33:59.062975 | orchestrator | Tuesday 17 February 2026 06:33:40 +0000 (0:00:06.225) 0:46:56.036 ******
2026-02-17 06:33:59.062982 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-17 06:33:59.062990 | orchestrator |
2026-02-17 06:33:59.062997 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-17 06:33:59.063004 | orchestrator | Tuesday 17 February 2026 06:33:41 +0000 (0:00:01.139) 0:46:57.176 ******
2026-02-17 06:33:59.063045 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:33:59.063056 | orchestrator |
2026-02-17 06:33:59.063063 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-17 06:33:59.063070 | orchestrator | Tuesday 17 February 2026 06:33:43 +0000 (0:00:01.587) 0:46:58.764 ******
2026-02-17 06:33:59.063077 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:33:59.063085 | orchestrator |
2026-02-17 06:33:59.063092 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 06:33:59.063099 | orchestrator | Tuesday 17 February 2026 06:33:45 +0000 (0:00:01.612) 0:47:00.377 ******
2026-02-17 06:33:59.063107 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063114 | orchestrator |
2026-02-17 06:33:59.063121 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 06:33:59.063134 | orchestrator | Tuesday 17 February 2026 06:33:45 +0000 (0:00:00.764) 0:47:01.142 ******
2026-02-17 06:33:59.063141 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063149 | orchestrator |
2026-02-17 06:33:59.063156 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 06:33:59.063163 | orchestrator | Tuesday 17 February 2026 06:33:46 +0000 (0:00:00.795) 0:47:01.937 ******
2026-02-17 06:33:59.063171 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063178 | orchestrator |
2026-02-17 06:33:59.063185 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 06:33:59.063192 | orchestrator | Tuesday 17 February 2026 06:33:47 +0000 (0:00:00.789) 0:47:02.727 ******
2026-02-17 06:33:59.063200 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063207 | orchestrator |
2026-02-17 06:33:59.063214 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 06:33:59.063222 | orchestrator | Tuesday 17 February 2026 06:33:48 +0000 (0:00:00.819) 0:47:03.546 ******
2026-02-17 06:33:59.063229 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063236 | orchestrator |
2026-02-17 06:33:59.063243 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 06:33:59.063251 | orchestrator | Tuesday 17 February 2026 06:33:49 +0000 (0:00:00.792) 0:47:04.339 ******
2026-02-17 06:33:59.063258 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063269 | orchestrator |
2026-02-17 06:33:59.063281 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 06:33:59.063294 | orchestrator | Tuesday 17 February 2026 06:33:49 +0000 (0:00:00.809) 0:47:05.149 ******
2026-02-17 06:33:59.063305 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063317 | orchestrator |
2026-02-17 06:33:59.063328 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 06:33:59.063340 | orchestrator | Tuesday 17 February 2026 06:33:50 +0000 (0:00:00.913) 0:47:06.063 ******
2026-02-17 06:33:59.063351 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063362 | orchestrator |
2026-02-17 06:33:59.063379 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 06:33:59.063390 | orchestrator | Tuesday 17 February 2026 06:33:51 +0000 (0:00:00.774) 0:47:06.837 ******
2026-02-17 06:33:59.063402 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063414 | orchestrator |
2026-02-17 06:33:59.063426 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 06:33:59.063438 | orchestrator | Tuesday 17 February 2026 06:33:52 +0000 (0:00:00.813) 0:47:07.651 ******
2026-02-17 06:33:59.063451 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:33:59.063463 | orchestrator |
2026-02-17 06:33:59.063474 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 06:33:59.063486 | orchestrator | Tuesday 17 February 2026 06:33:53 +0000 (0:00:00.797) 0:47:08.448 ******
2026-02-17 06:33:59.063498 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:33:59.063511 | orchestrator |
2026-02-17 06:33:59.063523 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 06:33:59.063535 | orchestrator | Tuesday 17 February 2026 06:33:54 +0000 (0:00:00.854) 0:47:09.303 ******
2026-02-17 06:33:59.063547 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-17 06:33:59.063560 | orchestrator |
2026-02-17 06:33:59.063572 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 06:33:59.063583 | orchestrator | Tuesday 17 February 2026 06:33:58 +0000 (0:00:04.130) 0:47:13.433 ******
2026-02-17 06:33:59.063599 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:34:41.250961 | orchestrator |
2026-02-17 06:34:41.251075 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 06:34:41.251092 | orchestrator | Tuesday 17 February 2026 06:33:59 +0000 (0:00:00.887) 0:47:14.321 ******
2026-02-17 06:34:41.251131 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-17 06:34:41.251148 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-17 06:34:41.251161 | orchestrator |
2026-02-17 06:34:41.251172 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 06:34:41.251184 | orchestrator | Tuesday 17 February 2026 06:34:06 +0000 (0:00:07.331) 0:47:21.652 ******
2026-02-17 06:34:41.251195 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:34:41.251207 | orchestrator |
2026-02-17 06:34:41.251218 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 06:34:41.251229 | orchestrator | Tuesday 17 February 2026 06:34:07 +0000 (0:00:00.790) 0:47:22.443 ******
2026-02-17 06:34:41.251240 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:34:41.251251 | orchestrator |
2026-02-17 06:34:41.251262 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:34:41.251275 | orchestrator | Tuesday 17 February 2026 06:34:07 +0000 (0:00:00.777) 0:47:23.220 ******
2026-02-17 06:34:41.251286 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:34:41.251297 | orchestrator |
2026-02-17 06:34:41.251308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:34:41.251319 | orchestrator | Tuesday 17 February 2026 06:34:08 +0000 (0:00:00.814) 0:47:24.034 ******
2026-02-17 06:34:41.251330 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:34:41.251341 | orchestrator |
2026-02-17 06:34:41.251356 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:34:41.251373 | orchestrator | Tuesday 17 February 2026 06:34:09 +0000 (0:00:00.803) 0:47:24.838 ******
2026-02-17 06:34:41.251397 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:34:41.251424 | orchestrator |
2026-02-17 06:34:41.251441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:34:41.251458 | orchestrator | Tuesday 17 February 2026 06:34:10 +0000 (0:00:00.814) 0:47:25.652 ******
2026-02-17 06:34:41.251474 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:34:41.251492 | orchestrator |
2026-02-17 06:34:41.251510 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:34:41.251526 | orchestrator | Tuesday 17 February 2026 06:34:11 +0000 (0:00:00.891) 0:47:26.544 ******
2026-02-17 06:34:41.251541 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 06:34:41.251557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 06:34:41.251573 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:34:41.251589 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:34:41.251606 | orchestrator |
2026-02-17 06:34:41.251623 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:34:41.251641 | orchestrator | Tuesday 17 February 2026 06:34:12 +0000 (0:00:01.437) 0:47:27.982 ******
2026-02-17 06:34:41.251660 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 06:34:41.251679 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 06:34:41.251698 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:34:41.251716 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:34:41.251734 | orchestrator |
2026-02-17 06:34:41.251753 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:34:41.251809 | orchestrator | Tuesday 17 February 2026 06:34:14 +0000 (0:00:01.494) 0:47:29.476 ******
2026-02-17 06:34:41.251829 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 06:34:41.251847 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 06:34:41.251865 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:34:41.251910 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:34:41.251930 | orchestrator |
2026-02-17 06:34:41.251947 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:34:41.251964 | orchestrator | Tuesday 17 February 2026 06:34:15 +0000 (0:00:01.100) 0:47:30.576 ******
2026-02-17 06:34:41.251982 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:34:41.252000 | orchestrator |
2026-02-17 06:34:41.252018 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:34:41.252036 | orchestrator | Tuesday 17 February 2026 06:34:16 +0000 (0:00:00.803) 0:47:31.380 ******
2026-02-17 06:34:41.252048 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-17 06:34:41.252059 | orchestrator |
2026-02-17 06:34:41.252070 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 06:34:41.252080 | orchestrator | Tuesday 17 February 2026 06:34:17 +0000 (0:00:01.016) 0:47:32.397 ******
2026-02-17 06:34:41.252091 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:34:41.252102 | orchestrator |
2026-02-17 06:34:41.252113 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-17 06:34:41.252124 | orchestrator | Tuesday 17 February 2026 06:34:18 +0000 (0:00:01.416) 0:47:33.813 ****** 2026-02-17 06:34:41.252134 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:34:41.252145 | orchestrator | 2026-02-17 06:34:41.252178 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-17 06:34:41.252189 | orchestrator | Tuesday 17 February 2026 06:34:19 +0000 (0:00:00.818) 0:47:34.632 ****** 2026-02-17 06:34:41.252200 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:34:41.252212 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:34:41.252223 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:34:41.252234 | orchestrator | 2026-02-17 06:34:41.252245 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-17 06:34:41.252255 | orchestrator | Tuesday 17 February 2026 06:34:21 +0000 (0:00:01.673) 0:47:36.306 ****** 2026-02-17 06:34:41.252266 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-02-17 06:34:41.252277 | orchestrator | 2026-02-17 06:34:41.252288 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-17 06:34:41.252299 | orchestrator | Tuesday 17 February 2026 06:34:22 +0000 (0:00:01.171) 0:47:37.478 ****** 2026-02-17 06:34:41.252309 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:34:41.252320 | orchestrator | 2026-02-17 06:34:41.252331 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-17 06:34:41.252342 | orchestrator | Tuesday 17 February 2026 06:34:23 +0000 (0:00:01.145) 
0:47:38.624 ****** 2026-02-17 06:34:41.252352 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:34:41.252363 | orchestrator | 2026-02-17 06:34:41.252374 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-17 06:34:41.252385 | orchestrator | Tuesday 17 February 2026 06:34:24 +0000 (0:00:01.122) 0:47:39.747 ****** 2026-02-17 06:34:41.252396 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:34:41.252406 | orchestrator | 2026-02-17 06:34:41.252417 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-17 06:34:41.252428 | orchestrator | Tuesday 17 February 2026 06:34:25 +0000 (0:00:01.446) 0:47:41.193 ****** 2026-02-17 06:34:41.252439 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:34:41.252449 | orchestrator | 2026-02-17 06:34:41.252460 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-17 06:34:41.252481 | orchestrator | Tuesday 17 February 2026 06:34:27 +0000 (0:00:01.601) 0:47:42.794 ****** 2026-02-17 06:34:41.252492 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-17 06:34:41.252503 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-17 06:34:41.252514 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-17 06:34:41.252525 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-17 06:34:41.252536 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-17 06:34:41.252547 | orchestrator | 2026-02-17 06:34:41.252558 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-17 06:34:41.252569 | orchestrator | Tuesday 17 February 2026 06:34:30 +0000 (0:00:02.506) 0:47:45.301 ****** 2026-02-17 
06:34:41.252579 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:34:41.252590 | orchestrator | 2026-02-17 06:34:41.252601 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-17 06:34:41.252612 | orchestrator | Tuesday 17 February 2026 06:34:30 +0000 (0:00:00.838) 0:47:46.139 ****** 2026-02-17 06:34:41.252622 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-02-17 06:34:41.252633 | orchestrator | 2026-02-17 06:34:41.252644 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-17 06:34:41.252655 | orchestrator | Tuesday 17 February 2026 06:34:32 +0000 (0:00:01.148) 0:47:47.288 ****** 2026-02-17 06:34:41.252666 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-17 06:34:41.252677 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-17 06:34:41.252687 | orchestrator | 2026-02-17 06:34:41.252698 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-17 06:34:41.252709 | orchestrator | Tuesday 17 February 2026 06:34:33 +0000 (0:00:01.803) 0:47:49.092 ****** 2026-02-17 06:34:41.252727 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:34:41.252738 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-17 06:34:41.252749 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 06:34:41.252760 | orchestrator | 2026-02-17 06:34:41.252771 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-17 06:34:41.252782 | orchestrator | Tuesday 17 February 2026 06:34:37 +0000 (0:00:03.186) 0:47:52.279 ****** 2026-02-17 06:34:41.252792 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-17 06:34:41.252803 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-17 
06:34:41.252814 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:34:41.252825 | orchestrator | 2026-02-17 06:34:41.252836 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-17 06:34:41.252847 | orchestrator | Tuesday 17 February 2026 06:34:38 +0000 (0:00:01.657) 0:47:53.936 ****** 2026-02-17 06:34:41.252857 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:34:41.252868 | orchestrator | 2026-02-17 06:34:41.252926 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-17 06:34:41.252939 | orchestrator | Tuesday 17 February 2026 06:34:39 +0000 (0:00:00.946) 0:47:54.882 ****** 2026-02-17 06:34:41.252950 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:34:41.252961 | orchestrator | 2026-02-17 06:34:41.252972 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-17 06:34:41.252983 | orchestrator | Tuesday 17 February 2026 06:34:40 +0000 (0:00:00.816) 0:47:55.699 ****** 2026-02-17 06:34:41.252994 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:34:41.253005 | orchestrator | 2026-02-17 06:34:41.253022 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-17 06:37:01.564902 | orchestrator | Tuesday 17 February 2026 06:34:41 +0000 (0:00:00.809) 0:47:56.508 ****** 2026-02-17 06:37:01.565050 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-02-17 06:37:01.565114 | orchestrator | 2026-02-17 06:37:01.565135 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-17 06:37:01.565153 | orchestrator | Tuesday 17 February 2026 06:34:42 +0000 (0:00:01.161) 0:47:57.669 ****** 2026-02-17 06:37:01.565172 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:37:01.565193 | orchestrator | 2026-02-17 06:37:01.565213 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-17 06:37:01.565231 | orchestrator | Tuesday 17 February 2026 06:34:43 +0000 (0:00:01.537) 0:47:59.207 ****** 2026-02-17 06:37:01.565248 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:37:01.565266 | orchestrator | 2026-02-17 06:37:01.565286 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-17 06:37:01.565305 | orchestrator | Tuesday 17 February 2026 06:34:47 +0000 (0:00:03.361) 0:48:02.568 ****** 2026-02-17 06:37:01.565323 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-02-17 06:37:01.565342 | orchestrator | 2026-02-17 06:37:01.565361 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-17 06:37:01.565379 | orchestrator | Tuesday 17 February 2026 06:34:48 +0000 (0:00:01.183) 0:48:03.752 ****** 2026-02-17 06:37:01.565398 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:37:01.565418 | orchestrator | 2026-02-17 06:37:01.565439 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-17 06:37:01.565458 | orchestrator | Tuesday 17 February 2026 06:34:50 +0000 (0:00:01.988) 0:48:05.741 ****** 2026-02-17 06:37:01.565476 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:37:01.565494 | orchestrator | 2026-02-17 06:37:01.565542 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-17 06:37:01.565563 | orchestrator | Tuesday 17 February 2026 06:34:52 +0000 (0:00:01.936) 0:48:07.678 ****** 2026-02-17 06:37:01.565581 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:37:01.565600 | orchestrator | 2026-02-17 06:37:01.565618 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-17 06:37:01.565638 | orchestrator | Tuesday 17 February 2026 06:34:54 +0000 (0:00:02.283) 0:48:09.962 ****** 2026-02-17 
06:37:01.565656 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:37:01.565676 | orchestrator | 2026-02-17 06:37:01.565696 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-17 06:37:01.565715 | orchestrator | Tuesday 17 February 2026 06:34:55 +0000 (0:00:01.156) 0:48:11.119 ****** 2026-02-17 06:37:01.565735 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:37:01.565760 | orchestrator | 2026-02-17 06:37:01.565780 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-17 06:37:01.565799 | orchestrator | Tuesday 17 February 2026 06:34:56 +0000 (0:00:01.109) 0:48:12.229 ****** 2026-02-17 06:37:01.565817 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-17 06:37:01.565835 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-17 06:37:01.565853 | orchestrator | 2026-02-17 06:37:01.565871 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-17 06:37:01.565889 | orchestrator | Tuesday 17 February 2026 06:34:58 +0000 (0:00:01.865) 0:48:14.094 ****** 2026-02-17 06:37:01.565908 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-17 06:37:01.565926 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-17 06:37:01.565944 | orchestrator | 2026-02-17 06:37:01.565963 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-17 06:37:01.565983 | orchestrator | Tuesday 17 February 2026 06:35:01 +0000 (0:00:02.877) 0:48:16.971 ****** 2026-02-17 06:37:01.566001 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-17 06:37:01.566098 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-17 06:37:01.566124 | orchestrator | 2026-02-17 06:37:01.566144 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-17 06:37:01.566162 | orchestrator | Tuesday 17 February 2026 06:35:06 +0000 (0:00:04.301) 
0:48:21.273 ****** 2026-02-17 06:37:01.566181 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:37:01.566220 | orchestrator | 2026-02-17 06:37:01.566238 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-17 06:37:01.566257 | orchestrator | Tuesday 17 February 2026 06:35:06 +0000 (0:00:00.879) 0:48:22.152 ****** 2026-02-17 06:37:01.566317 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-17 06:37:01.566342 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:37:01.566363 | orchestrator | 2026-02-17 06:37:01.566382 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-17 06:37:01.566400 | orchestrator | Tuesday 17 February 2026 06:35:20 +0000 (0:00:13.309) 0:48:35.461 ****** 2026-02-17 06:37:01.566419 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:37:01.566437 | orchestrator | 2026-02-17 06:37:01.566458 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-17 06:37:01.566478 | orchestrator | Tuesday 17 February 2026 06:35:21 +0000 (0:00:00.915) 0:48:36.377 ****** 2026-02-17 06:37:01.566498 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:37:01.566558 | orchestrator | 2026-02-17 06:37:01.566581 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-17 06:37:01.566600 | orchestrator | Tuesday 17 February 2026 06:35:21 +0000 (0:00:00.791) 0:48:37.169 ****** 2026-02-17 06:37:01.566620 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:37:01.566642 | orchestrator | 2026-02-17 06:37:01.566662 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-02-17 06:37:01.566682 | orchestrator | Tuesday 17 February 2026 06:35:22 +0000 (0:00:00.768) 0:48:37.938 ****** 2026-02-17 06:37:01.566703 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-02-17 06:37:01.566724 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:37:01.566745 | orchestrator | 2026-02-17 06:37:01.566790 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-02-17 06:37:01.566811 | orchestrator | 2026-02-17 06:37:01.566831 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:37:01.566843 | orchestrator | Tuesday 17 February 2026 06:35:28 +0000 (0:00:05.514) 0:48:43.452 ****** 2026-02-17 06:37:01.566854 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:37:01.566865 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:37:01.566876 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:37:01.566886 | orchestrator | 2026-02-17 06:37:01.566897 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:37:01.566908 | orchestrator | Tuesday 17 February 2026 06:35:29 +0000 (0:00:01.665) 0:48:45.117 ****** 2026-02-17 06:37:01.566919 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:37:01.566930 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:37:01.566941 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:37:01.566951 | orchestrator | 2026-02-17 06:37:01.566962 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-02-17 06:37:01.566973 | orchestrator | Tuesday 17 February 2026 06:35:31 +0000 (0:00:01.753) 0:48:46.871 ****** 2026-02-17 06:37:01.566984 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-17 06:37:01.566995 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-17 06:37:01.567007 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-17 06:37:01.567018 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-17 06:37:01.567031 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-17 06:37:01.567042 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-17 06:37:01.567067 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-17 06:37:01.567078 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-17 06:37:01.567089 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-17 06:37:01.567108 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-17 06:37:01.567126 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-17 06:37:01.567144 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-17 06:37:01.567163 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-17 06:37:01.567183 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-17 06:37:01.567200 | orchestrator | 2026-02-17 06:37:01.567219 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-02-17 
06:37:01.567236 | orchestrator | Tuesday 17 February 2026 06:36:44 +0000 (0:01:13.340) 0:50:00.212 ****** 2026-02-17 06:37:01.567254 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-17 06:37:01.567272 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-17 06:37:01.567291 | orchestrator | 2026-02-17 06:37:01.567310 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-02-17 06:37:01.567327 | orchestrator | Tuesday 17 February 2026 06:36:50 +0000 (0:00:05.549) 0:50:05.761 ****** 2026-02-17 06:37:01.567346 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:37:01.567364 | orchestrator | 2026-02-17 06:37:01.567382 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-02-17 06:37:01.567401 | orchestrator | 2026-02-17 06:37:01.567430 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:37:01.567448 | orchestrator | Tuesday 17 February 2026 06:36:53 +0000 (0:00:03.087) 0:50:08.849 ****** 2026-02-17 06:37:01.567467 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-17 06:37:01.567485 | orchestrator | 2026-02-17 06:37:01.567503 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:37:01.567588 | orchestrator | Tuesday 17 February 2026 06:36:54 +0000 (0:00:01.208) 0:50:10.058 ****** 2026-02-17 06:37:01.567608 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:37:01.567628 | orchestrator | 2026-02-17 06:37:01.567645 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:37:01.567665 | orchestrator | Tuesday 17 February 2026 06:36:56 +0000 (0:00:01.519) 0:50:11.578 ****** 2026-02-17 06:37:01.567683 | orchestrator | ok: 
[testbed-node-0] 2026-02-17 06:37:01.567701 | orchestrator | 2026-02-17 06:37:01.567721 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:37:01.567739 | orchestrator | Tuesday 17 February 2026 06:36:57 +0000 (0:00:01.217) 0:50:12.795 ****** 2026-02-17 06:37:01.567757 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:37:01.567777 | orchestrator | 2026-02-17 06:37:01.567796 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:37:01.567817 | orchestrator | Tuesday 17 February 2026 06:36:59 +0000 (0:00:01.679) 0:50:14.475 ****** 2026-02-17 06:37:01.567836 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:37:01.567854 | orchestrator | 2026-02-17 06:37:01.567872 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:37:01.567892 | orchestrator | Tuesday 17 February 2026 06:37:00 +0000 (0:00:01.169) 0:50:15.644 ****** 2026-02-17 06:37:01.567925 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:37:28.302335 | orchestrator | 2026-02-17 06:37:28.302500 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:37:28.302521 | orchestrator | Tuesday 17 February 2026 06:37:01 +0000 (0:00:01.179) 0:50:16.823 ****** 2026-02-17 06:37:28.302560 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:37:28.302573 | orchestrator | 2026-02-17 06:37:28.302586 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:37:28.302597 | orchestrator | Tuesday 17 February 2026 06:37:02 +0000 (0:00:01.234) 0:50:18.058 ****** 2026-02-17 06:37:28.302609 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:28.302620 | orchestrator | 2026-02-17 06:37:28.302631 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:37:28.302643 | orchestrator | 
Tuesday 17 February 2026 06:37:03 +0000 (0:00:01.170) 0:50:19.228 ****** 2026-02-17 06:37:28.302653 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:37:28.302664 | orchestrator | 2026-02-17 06:37:28.302675 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:37:28.302686 | orchestrator | Tuesday 17 February 2026 06:37:05 +0000 (0:00:01.154) 0:50:20.383 ****** 2026-02-17 06:37:28.302697 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 06:37:28.302708 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:37:28.302719 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:37:28.302730 | orchestrator | 2026-02-17 06:37:28.302740 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:37:28.302751 | orchestrator | Tuesday 17 February 2026 06:37:06 +0000 (0:00:01.713) 0:50:22.096 ****** 2026-02-17 06:37:28.302762 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:37:28.302773 | orchestrator | 2026-02-17 06:37:28.302784 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:37:28.302795 | orchestrator | Tuesday 17 February 2026 06:37:08 +0000 (0:00:01.328) 0:50:23.425 ****** 2026-02-17 06:37:28.302806 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-17 06:37:28.302816 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:37:28.302827 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:37:28.302838 | orchestrator | 2026-02-17 06:37:28.302849 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:37:28.302860 | orchestrator | Tuesday 17 February 2026 06:37:11 +0000 (0:00:03.232) 
0:50:26.658 ****** 2026-02-17 06:37:28.302871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-17 06:37:28.302882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-17 06:37:28.302893 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-17 06:37:28.302904 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:28.302915 | orchestrator | 2026-02-17 06:37:28.302926 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:37:28.302937 | orchestrator | Tuesday 17 February 2026 06:37:12 +0000 (0:00:01.445) 0:50:28.104 ****** 2026-02-17 06:37:28.302950 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:37:28.302965 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:37:28.302976 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:37:28.302987 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:28.302999 | orchestrator | 2026-02-17 06:37:28.303024 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:37:28.303045 | orchestrator | Tuesday 17 February 2026 06:37:14 +0000 (0:00:02.059) 0:50:30.164 ****** 2026-02-17 06:37:28.303058 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:28.303073 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:28.303102 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:28.303114 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:28.303125 | orchestrator | 2026-02-17 06:37:28.303136 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:37:28.303148 | orchestrator | Tuesday 17 February 2026 06:37:16 +0000 (0:00:01.246) 0:50:31.410 ****** 2026-02-17 06:37:28.303161 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:37:08.674537', 'end': '2026-02-17 06:37:08.730845', 'delta': '0:00:00.056308', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:37:28.303175 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:37:09.255009', 'end': '2026-02-17 06:37:09.306705', 'delta': '0:00:00.051696', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:37:28.303187 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:37:10.165022', 'end': '2026-02-17 06:37:10.207432', 'delta': '0:00:00.042410', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:37:28.303206 
| orchestrator |
2026-02-17 06:37:28.303217 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-17 06:37:28.303228 | orchestrator | Tuesday 17 February 2026 06:37:17 +0000 (0:00:01.321) 0:50:32.732 ******
2026-02-17 06:37:28.303239 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:37:28.303250 | orchestrator |
2026-02-17 06:37:28.303266 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-17 06:37:28.303278 | orchestrator | Tuesday 17 February 2026 06:37:19 +0000 (0:00:01.804) 0:50:34.537 ******
2026-02-17 06:37:28.303289 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:37:28.303299 | orchestrator |
2026-02-17 06:37:28.303310 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-17 06:37:28.303321 | orchestrator | Tuesday 17 February 2026 06:37:20 +0000 (0:00:01.285) 0:50:35.822 ******
2026-02-17 06:37:28.303332 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:37:28.303343 | orchestrator |
2026-02-17 06:37:28.303354 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-17 06:37:28.303365 | orchestrator | Tuesday 17 February 2026 06:37:21 +0000 (0:00:01.155) 0:50:36.978 ******
2026-02-17 06:37:28.303376 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:37:28.303387 | orchestrator |
2026-02-17 06:37:28.303398 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-17 06:37:28.303409 | orchestrator | Tuesday 17 February 2026 06:37:24 +0000 (0:00:02.993) 0:50:39.971 ******
2026-02-17 06:37:28.303420 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:37:28.303431 | orchestrator |
2026-02-17 06:37:28.303442 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-17 06:37:28.303471 | orchestrator | Tuesday 17 February 2026 06:37:25 +0000 (0:00:01.184) 0:50:41.155 ******
2026-02-17 06:37:28.303482 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:37:28.303493 | orchestrator |
2026-02-17 06:37:28.303504 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-17 06:37:28.303516 | orchestrator | Tuesday 17 February 2026 06:37:27 +0000 (0:00:01.146) 0:50:42.302 ******
2026-02-17 06:37:28.303533 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:37:39.620881 | orchestrator |
2026-02-17 06:37:39.621037 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-17 06:37:39.621059 | orchestrator | Tuesday 17 February 2026 06:37:28 +0000 (0:00:01.258) 0:50:43.561 ******
2026-02-17 06:37:39.621073 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:37:39.621088 | orchestrator |
2026-02-17 06:37:39.621101 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-17 06:37:39.621115 | orchestrator | Tuesday 17 February 2026 06:37:29 +0000 (0:00:01.163) 0:50:44.725 ******
2026-02-17 06:37:39.621129 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:37:39.621143 | orchestrator |
2026-02-17 06:37:39.621158 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-17 06:37:39.621187 | orchestrator | Tuesday 17 February 2026 06:37:30 +0000 (0:00:01.188) 0:50:45.913 ******
2026-02-17 06:37:39.621214 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:37:39.621229 | orchestrator |
2026-02-17 06:37:39.621243 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-17 06:37:39.621258 | orchestrator | Tuesday 17 February 2026 06:37:31 +0000 (0:00:01.180) 0:50:47.094 ******
2026-02-17 06:37:39.621273 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:37:39.621287 | orchestrator |
2026-02-17 06:37:39.621300 | orchestrator | TASK [ceph-facts :
Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:37:39.621315 | orchestrator | Tuesday 17 February 2026 06:37:33 +0000 (0:00:01.194) 0:50:48.288 ****** 2026-02-17 06:37:39.621330 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:39.621344 | orchestrator | 2026-02-17 06:37:39.621359 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:37:39.621373 | orchestrator | Tuesday 17 February 2026 06:37:34 +0000 (0:00:01.178) 0:50:49.466 ****** 2026-02-17 06:37:39.621415 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:39.621456 | orchestrator | 2026-02-17 06:37:39.621474 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:37:39.621490 | orchestrator | Tuesday 17 February 2026 06:37:35 +0000 (0:00:01.143) 0:50:50.610 ****** 2026-02-17 06:37:39.621506 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:39.621523 | orchestrator | 2026-02-17 06:37:39.621538 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:37:39.621555 | orchestrator | Tuesday 17 February 2026 06:37:36 +0000 (0:00:01.122) 0:50:51.734 ****** 2026-02-17 06:37:39.621573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:37:39.621592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:37:39.621610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:37:39.621644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:37:39.621664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:37:39.621703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:37:39.621718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:37:39.621737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69a38e66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 
'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:37:39.621773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:37:39.621787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:37:39.621800 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:39.621814 | orchestrator | 2026-02-17 06:37:39.621828 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:37:39.621840 | orchestrator | Tuesday 17 
February 2026 06:37:38 +0000 (0:00:01.824) 0:50:53.558 ****** 2026-02-17 06:37:39.621866 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.769963 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770165 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770188 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770216 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770228 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770239 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770277 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69a38e66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14', 
'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1', 'scsi-SQEMU_QEMU_HARDDISK_69a38e66-d857-4b93-85c9-a75df11f4978-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770305 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770318 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:37:43.770330 | orchestrator | skipping: [testbed-node-0] 2026-02-17 06:37:43.770344 | orchestrator | 2026-02-17 06:37:43.770357 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 06:37:43.770369 | orchestrator | Tuesday 17 February 2026 06:37:39 +0000 (0:00:01.322) 0:50:54.881 ****** 2026-02-17 06:37:43.770381 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:37:43.770392 | orchestrator | 2026-02-17 06:37:43.770404 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 06:37:43.770415 | orchestrator 
| Tuesday 17 February 2026 06:37:41 +0000 (0:00:01.159) 0:50:56.401 ******
2026-02-17 06:37:43.770476 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:37:43.770496 | orchestrator |
2026-02-17 06:37:43.770509 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:37:43.770523 | orchestrator | Tuesday 17 February 2026 06:37:42 +0000 (0:00:01.469) 0:50:57.560 ******
2026-02-17 06:37:43.770535 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:37:43.770548 | orchestrator |
2026-02-17 06:37:43.770560 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:37:43.770581 | orchestrator | Tuesday 17 February 2026 06:37:43 +0000 (0:00:01.469) 0:50:59.030 ******
2026-02-17 06:38:38.386766 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:38:38.386906 | orchestrator |
2026-02-17 06:38:38.386935 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:38:38.386958 | orchestrator | Tuesday 17 February 2026 06:37:44 +0000 (0:00:01.200) 0:51:00.230 ******
2026-02-17 06:38:38.386977 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:38:38.386995 | orchestrator |
2026-02-17 06:38:38.387014 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:38:38.387034 | orchestrator | Tuesday 17 February 2026 06:37:46 +0000 (0:00:01.283) 0:51:01.514 ******
2026-02-17 06:38:38.387052 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:38:38.387070 | orchestrator |
2026-02-17 06:38:38.387087 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:38:38.387105 | orchestrator | Tuesday 17 February 2026 06:37:47 +0000 (0:00:01.209) 0:51:02.723 ******
2026-02-17 06:38:38.387125 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:38:38.387146 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 06:38:38.387165 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 06:38:38.387185 | orchestrator |
2026-02-17 06:38:38.387205 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:38:38.387224 | orchestrator | Tuesday 17 February 2026 06:37:49 +0000 (0:00:02.041) 0:51:04.764 ******
2026-02-17 06:38:38.387245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:38:38.387265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-17 06:38:38.387286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-17 06:38:38.387309 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:38:38.387363 | orchestrator |
2026-02-17 06:38:38.387385 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 06:38:38.387405 | orchestrator | Tuesday 17 February 2026 06:37:50 +0000 (0:00:01.267) 0:51:06.032 ******
2026-02-17 06:38:38.387424 | orchestrator | skipping: [testbed-node-0]
2026-02-17 06:38:38.387444 | orchestrator |
2026-02-17 06:38:38.387464 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 06:38:38.387483 | orchestrator | Tuesday 17 February 2026 06:37:51 +0000 (0:00:01.120) 0:51:07.153 ******
2026-02-17 06:38:38.387504 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:38:38.387523 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:38:38.387544 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:38:38.387564 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:38:38.387583 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:38:38.387603 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:38:38.387624 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:38:38.387643 | orchestrator |
2026-02-17 06:38:38.387661 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 06:38:38.387680 | orchestrator | Tuesday 17 February 2026 06:37:54 +0000 (0:00:02.184) 0:51:09.337 ******
2026-02-17 06:38:38.387718 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-17 06:38:38.387751 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:38:38.387802 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:38:38.387839 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-17 06:38:38.387858 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:38:38.387875 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:38:38.387893 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:38:38.387910 | orchestrator |
2026-02-17 06:38:38.387928 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************
2026-02-17 06:38:38.387947 | orchestrator | Tuesday 17 February 2026 06:37:57 +0000 (0:00:03.146) 0:51:12.484 ******
2026-02-17 06:38:38.387967 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:38:38.387985 | orchestrator |
2026-02-17 06:38:38.388004 | orchestrator | TASK [Wait until only rank 0 is up] ********************************************
2026-02-17 06:38:38.388022 | orchestrator | Tuesday 17 February 2026 06:38:00 +0000
(0:00:03.240) 0:51:15.725 ****** 2026-02-17 06:38:38.388040 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:38:38.388059 | orchestrator | 2026-02-17 06:38:38.388078 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-02-17 06:38:38.388097 | orchestrator | Tuesday 17 February 2026 06:38:03 +0000 (0:00:02.947) 0:51:18.673 ****** 2026-02-17 06:38:38.388115 | orchestrator | ok: [testbed-node-0] 2026-02-17 06:38:38.388131 | orchestrator | 2026-02-17 06:38:38.388142 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-02-17 06:38:38.388153 | orchestrator | Tuesday 17 February 2026 06:38:05 +0000 (0:00:02.113) 0:51:20.787 ****** 2026-02-17 06:38:38.388196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4685', 'value': {'gid': 4685, 'name': 'testbed-node-3', 'rank': 0, 'incarnation': 7, 'state': 'up:active', 'state_seq': 1200, 'addr': '192.168.16.13:6817/2860474164', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.13:6816', 'nonce': 2860474164}, {'type': 'v1', 'addr': '192.168.16.13:6817', 'nonce': 2860474164}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-02-17 06:38:38.388211 | orchestrator | 2026-02-17 06:38:38.388222 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-02-17 06:38:38.388233 | orchestrator | Tuesday 17 February 2026 06:38:06 +0000 (0:00:01.238) 0:51:22.025 ****** 2026-02-17 06:38:38.388244 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-3)
2026-02-17 06:38:38.388255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 06:38:38.388266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 06:38:38.388277 | orchestrator |
2026-02-17 06:38:38.388288 | orchestrator | TASK [Create standby_mdss group] ***********************************************
2026-02-17 06:38:38.388299 | orchestrator | Tuesday 17 February 2026 06:38:08 +0000 (0:00:01.549) 0:51:23.574 ******
2026-02-17 06:38:38.388309 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4)
2026-02-17 06:38:38.388380 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5)
2026-02-17 06:38:38.388392 | orchestrator |
2026-02-17 06:38:38.388403 | orchestrator | TASK [Stop standby ceph mds] ***************************************************
2026-02-17 06:38:38.388414 | orchestrator | Tuesday 17 February 2026 06:38:09 +0000 (0:00:01.503) 0:51:25.078 ******
2026-02-17 06:38:38.388425 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:38:38.388451 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:38:38.388462 | orchestrator |
2026-02-17 06:38:38.388473 | orchestrator | TASK [Mask systemd units for standby ceph mds] *********************************
2026-02-17 06:38:38.388484 | orchestrator | Tuesday 17 February 2026 06:38:19 +0000 (0:00:03.769) 0:51:34.566 ******
2026-02-17 06:38:38.388494 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:38:38.388505 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:38:38.388516 | orchestrator |
2026-02-17 06:38:38.388527 | orchestrator | TASK [Wait until all standbys mds are stopped] *********************************
2026-02-17 06:38:38.388538 | orchestrator | Tuesday 17 February 2026 06:38:23 +0000 (0:00:03.769) 0:51:38.336 ******
2026-02-17 06:38:38.388549 | orchestrator | ok: [testbed-node-0]
2026-02-17 06:38:38.388561 | orchestrator |
2026-02-17 06:38:38.388571 | orchestrator | TASK [Create active_mdss group] ************************************************
2026-02-17 06:38:38.388582 | orchestrator | Tuesday 17 February 2026 06:38:25 +0000 (0:00:02.150) 0:51:40.487 ******
2026-02-17 06:38:38.388593 | orchestrator | changed: [testbed-node-0]
2026-02-17 06:38:38.388604 | orchestrator |
2026-02-17 06:38:38.388615 | orchestrator | PLAY [Upgrade active mds] ******************************************************
2026-02-17 06:38:38.388626 | orchestrator |
2026-02-17 06:38:38.388637 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-17 06:38:38.388648 | orchestrator | Tuesday 17 February 2026 06:38:27 +0000 (0:00:01.828) 0:51:42.315 ******
2026-02-17 06:38:38.388659 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-17 06:38:38.388669 | orchestrator |
2026-02-17 06:38:38.388680 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-17 06:38:38.388699 | orchestrator | Tuesday 17 February 2026 06:38:28 +0000 (0:00:01.359) 0:51:43.675 ******
2026-02-17 06:38:38.388710 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:38:38.388721 | orchestrator |
2026-02-17 06:38:38.388732 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-17 06:38:38.388743 | orchestrator | Tuesday 17 February 2026 06:38:29 +0000 (0:00:01.462) 0:51:45.138 ******
2026-02-17 06:38:38.388753 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:38:38.388764 | orchestrator |
2026-02-17 06:38:38.388775 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-17 06:38:38.388786 | orchestrator | Tuesday 17 February 2026 06:38:31 +0000 (0:00:01.520) 0:51:46.377 ******
2026-02-17 06:38:38.388797 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:38:38.388808 | orchestrator |
2026-02-17 06:38:38.388819 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-17 06:38:38.388830 | orchestrator | Tuesday 17 February 2026 06:38:32 +0000 (0:00:01.520) 0:51:47.898 ******
2026-02-17 06:38:38.388840 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:38:38.388851 | orchestrator |
2026-02-17 06:38:38.388862 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-17 06:38:38.388873 | orchestrator | Tuesday 17 February 2026 06:38:33 +0000 (0:00:01.133) 0:51:49.031 ******
2026-02-17 06:38:38.388884 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:38:38.388895 | orchestrator |
2026-02-17 06:38:38.388906 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-17 06:38:38.388917 | orchestrator | Tuesday 17 February 2026 06:38:34 +0000 (0:00:01.106) 0:51:50.138 ******
2026-02-17 06:38:38.388928 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:38:38.388938 | orchestrator |
2026-02-17 06:38:38.388949 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-17 06:38:38.388960 | orchestrator | Tuesday 17 February 2026 06:38:36 +0000 (0:00:01.187) 0:51:51.325 ******
2026-02-17 06:38:38.388971 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:38:38.388982 | orchestrator |
2026-02-17 06:38:38.388993 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-17 06:38:38.389011 | orchestrator | Tuesday 17 February 2026 06:38:37 +0000 (0:00:01.145) 0:51:52.471 ******
2026-02-17 06:38:38.389022 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:38:38.389033 | orchestrator |
2026-02-17 06:38:38.389052 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-17 06:39:04.366864 | orchestrator | Tuesday 17 February 2026 06:38:38 +0000 (0:00:01.173) 0:51:53.644 ******
2026-02-17 06:39:04.367002 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:39:04.367029 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:39:04.367049 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:39:04.367069 | orchestrator |
2026-02-17 06:39:04.367089 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-17 06:39:04.367109 | orchestrator | Tuesday 17 February 2026 06:38:40 +0000 (0:00:02.080) 0:51:55.725 ******
2026-02-17 06:39:04.367128 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:39:04.367148 | orchestrator |
2026-02-17 06:39:04.367168 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-17 06:39:04.367187 | orchestrator | Tuesday 17 February 2026 06:38:41 +0000 (0:00:01.237) 0:51:56.963 ******
2026-02-17 06:39:04.367206 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:39:04.367225 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:39:04.367242 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:39:04.367260 | orchestrator |
2026-02-17 06:39:04.367306 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-17 06:39:04.367324 | orchestrator | Tuesday 17 February 2026 06:38:44 +0000 (0:00:03.257) 0:52:00.220 ******
2026-02-17 06:39:04.367343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 06:39:04.367362 | orchestrator |
skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-17 06:39:04.367381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-17 06:39:04.367401 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:04.367421 | orchestrator | 2026-02-17 06:39:04.367442 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:39:04.367461 | orchestrator | Tuesday 17 February 2026 06:38:46 +0000 (0:00:01.839) 0:52:02.060 ****** 2026-02-17 06:39:04.367483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:39:04.367506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:39:04.367526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:39:04.367545 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:04.367567 | orchestrator | 2026-02-17 06:39:04.367586 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:39:04.367605 | orchestrator | Tuesday 17 February 2026 06:38:49 +0000 (0:00:02.223) 0:52:04.284 ****** 2026-02-17 06:39:04.367648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:39:04.367701 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:39:04.367722 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:39:04.367736 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:04.367748 | orchestrator | 2026-02-17 06:39:04.367759 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:39:04.367770 | orchestrator | Tuesday 17 February 2026 06:38:50 +0000 (0:00:01.235) 0:52:05.519 ****** 2026-02-17 06:39:04.367804 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:38:42.222078', 'end': '2026-02-17 06:38:42.267621', 'delta': '0:00:00.045543', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:39:04.367819 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:38:43.198446', 'end': '2026-02-17 06:38:43.239828', 'delta': '0:00:00.041382', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:39:04.367832 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:38:43.746870', 'end': '2026-02-17 06:38:43.789964', 'delta': '0:00:00.043094', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:39:04.367845 | orchestrator | 2026-02-17 06:39:04.367865 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:39:04.367883 | 
orchestrator | Tuesday 17 February 2026 06:38:51 +0000 (0:00:01.235) 0:52:06.755 ****** 2026-02-17 06:39:04.367901 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:39:04.367919 | orchestrator | 2026-02-17 06:39:04.367935 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:39:04.367951 | orchestrator | Tuesday 17 February 2026 06:38:52 +0000 (0:00:01.326) 0:52:08.082 ****** 2026-02-17 06:39:04.367983 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:04.368002 | orchestrator | 2026-02-17 06:39:04.368031 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:39:04.368044 | orchestrator | Tuesday 17 February 2026 06:38:54 +0000 (0:00:01.249) 0:52:09.331 ****** 2026-02-17 06:39:04.368055 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:39:04.368065 | orchestrator | 2026-02-17 06:39:04.368077 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:39:04.368088 | orchestrator | Tuesday 17 February 2026 06:38:55 +0000 (0:00:01.165) 0:52:10.496 ****** 2026-02-17 06:39:04.368099 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:39:04.368109 | orchestrator | 2026-02-17 06:39:04.368120 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:39:04.368131 | orchestrator | Tuesday 17 February 2026 06:38:57 +0000 (0:00:02.002) 0:52:12.499 ****** 2026-02-17 06:39:04.368142 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:39:04.368153 | orchestrator | 2026-02-17 06:39:04.368164 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:39:04.368175 | orchestrator | Tuesday 17 February 2026 06:38:58 +0000 (0:00:01.164) 0:52:13.664 ****** 2026-02-17 06:39:04.368186 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:04.368196 | orchestrator | 
2026-02-17 06:39:04.368207 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:39:04.368218 | orchestrator | Tuesday 17 February 2026 06:38:59 +0000 (0:00:01.142) 0:52:14.807 ****** 2026-02-17 06:39:04.368229 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:04.368240 | orchestrator | 2026-02-17 06:39:04.368251 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:39:04.368262 | orchestrator | Tuesday 17 February 2026 06:39:00 +0000 (0:00:01.331) 0:52:16.138 ****** 2026-02-17 06:39:04.368314 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:04.368325 | orchestrator | 2026-02-17 06:39:04.368336 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:39:04.368347 | orchestrator | Tuesday 17 February 2026 06:39:02 +0000 (0:00:01.152) 0:52:17.291 ****** 2026-02-17 06:39:04.368358 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:04.368369 | orchestrator | 2026-02-17 06:39:04.368380 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:39:04.368391 | orchestrator | Tuesday 17 February 2026 06:39:03 +0000 (0:00:01.179) 0:52:18.471 ****** 2026-02-17 06:39:04.368412 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:39:09.423880 | orchestrator | 2026-02-17 06:39:09.424005 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:39:09.424027 | orchestrator | Tuesday 17 February 2026 06:39:04 +0000 (0:00:01.156) 0:52:19.627 ****** 2026-02-17 06:39:09.424044 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:09.424061 | orchestrator | 2026-02-17 06:39:09.424077 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:39:09.424093 | orchestrator | Tuesday 17 February 2026 06:39:05 +0000 
(0:00:01.246) 0:52:20.874 ****** 2026-02-17 06:39:09.424108 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:39:09.424124 | orchestrator | 2026-02-17 06:39:09.424140 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:39:09.424154 | orchestrator | Tuesday 17 February 2026 06:39:06 +0000 (0:00:01.170) 0:52:22.044 ****** 2026-02-17 06:39:09.424171 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:09.424189 | orchestrator | 2026-02-17 06:39:09.424204 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:39:09.424220 | orchestrator | Tuesday 17 February 2026 06:39:07 +0000 (0:00:01.118) 0:52:23.163 ****** 2026-02-17 06:39:09.424235 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:39:09.424250 | orchestrator | 2026-02-17 06:39:09.424342 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:39:09.424390 | orchestrator | Tuesday 17 February 2026 06:39:09 +0000 (0:00:01.291) 0:52:24.455 ****** 2026-02-17 06:39:09.424413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:39:09.424435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'uuids': ['b2ca6990-5b39-46e1-9ab9-fa89aec205ee'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP']}})  2026-02-17 06:39:09.424474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce83e4f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:39:09.424495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f']}})  2026-02-17 06:39:09.424540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:39:09.424587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:39:09.424608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:39:09.424640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:39:09.424660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac', 'dm-uuid-CRYPT-LUKS2-edb3e2e5a632414f8a4f0db6f2dd266c-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:39:09.424678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:39:09.424706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'uuids': ['edb3e2e5-a632-414f-8a4f-0db6f2dd266c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac']}})  2026-02-17 06:39:09.424724 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3']}})  2026-02-17 06:39:09.424775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:39:10.808973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d567a40', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:39:10.809116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:39:10.809141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:39:10.809158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP', 'dm-uuid-CRYPT-LUKS2-b2ca69905b3946e19ab9fa89aec205ee-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:39:10.809172 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:39:10.809186 | orchestrator | 2026-02-17 06:39:10.809198 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:39:10.809212 | orchestrator | Tuesday 17 February 2026 06:39:10 +0000 (0:00:01.401) 0:52:25.856 ****** 2026-02-17 06:39:10.809246 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:39:10.809334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'uuids': ['b2ca6990-5b39-46e1-9ab9-fa89aec205ee'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:39:10.809348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce83e4f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:10.809369 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f']}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:10.809383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:10.809403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981764 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac', 'dm-uuid-CRYPT-LUKS2-edb3e2e5a632414f8a4f0db6f2dd266c-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'uuids': ['edb3e2e5-a632-414f-8a4f-0db6f2dd266c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac']}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3']}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d567a40', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:11.981970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:47.522260 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP', 'dm-uuid-CRYPT-LUKS2-b2ca69905b3946e19ab9fa89aec205ee-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-17 06:39:47.522377 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.522395 | orchestrator |
2026-02-17 06:39:47.522407 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-17 06:39:47.522420 | orchestrator | Tuesday 17 February 2026 06:39:11 +0000 (0:00:01.380) 0:52:27.236 ******
2026-02-17 06:39:47.522431 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:39:47.522443 | orchestrator |
2026-02-17 06:39:47.522454 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-17 06:39:47.522465 | orchestrator | Tuesday 17 February 2026 06:39:13 +0000 (0:00:01.605) 0:52:28.842 ******
2026-02-17 06:39:47.522476 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:39:47.522487 | orchestrator |
2026-02-17 06:39:47.522498 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:39:47.522509 | orchestrator | Tuesday 17 February 2026 06:39:14 +0000 (0:00:01.183) 0:52:30.025 ******
2026-02-17 06:39:47.522520 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:39:47.522530 | orchestrator |
2026-02-17 06:39:47.522541 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:39:47.522552 | orchestrator | Tuesday 17 February 2026 06:39:16 +0000 (0:00:01.479) 0:52:31.505 ******
2026-02-17 06:39:47.522563 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.522574 | orchestrator |
2026-02-17 06:39:47.522601 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-17 06:39:47.522612 | orchestrator | Tuesday 17 February 2026 06:39:17 +0000 (0:00:01.150) 0:52:32.655 ******
2026-02-17 06:39:47.522623 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.522634 | orchestrator |
2026-02-17 06:39:47.522645 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-17 06:39:47.522656 | orchestrator | Tuesday 17 February 2026 06:39:18 +0000 (0:00:01.250) 0:52:33.906 ******
2026-02-17 06:39:47.522667 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.522678 | orchestrator |
2026-02-17 06:39:47.522689 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-17 06:39:47.522700 | orchestrator | Tuesday 17 February 2026 06:39:19 +0000 (0:00:01.188) 0:52:35.094 ******
2026-02-17 06:39:47.522734 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 06:39:47.522745 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 06:39:47.522756 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 06:39:47.522766 | orchestrator |
2026-02-17 06:39:47.522777 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-17 06:39:47.522788 | orchestrator | Tuesday 17 February 2026 06:39:21 +0000 (0:00:02.125) 0:52:37.219 ******
2026-02-17 06:39:47.522799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-17 06:39:47.522809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-17 06:39:47.522820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-17 06:39:47.522831 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.522842 | orchestrator |
2026-02-17 06:39:47.522857 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-17 06:39:47.522875 | orchestrator | Tuesday 17 February 2026 06:39:23 +0000 (0:00:01.291) 0:52:38.511 ******
2026-02-17 06:39:47.522893 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-02-17 06:39:47.522912 |
orchestrator |
2026-02-17 06:39:47.522931 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:39:47.522951 | orchestrator | Tuesday 17 February 2026 06:39:24 +0000 (0:00:01.153) 0:52:39.665 ******
2026-02-17 06:39:47.522968 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.522980 | orchestrator |
2026-02-17 06:39:47.522990 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:39:47.523001 | orchestrator | Tuesday 17 February 2026 06:39:25 +0000 (0:00:01.134) 0:52:40.799 ******
2026-02-17 06:39:47.523012 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.523023 | orchestrator |
2026-02-17 06:39:47.523033 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:39:47.523045 | orchestrator | Tuesday 17 February 2026 06:39:26 +0000 (0:00:01.146) 0:52:41.945 ******
2026-02-17 06:39:47.523055 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.523066 | orchestrator |
2026-02-17 06:39:47.523077 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:39:47.523088 | orchestrator | Tuesday 17 February 2026 06:39:27 +0000 (0:00:01.149) 0:52:43.095 ******
2026-02-17 06:39:47.523098 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:39:47.523109 | orchestrator |
2026-02-17 06:39:47.523119 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:39:47.523130 | orchestrator | Tuesday 17 February 2026 06:39:29 +0000 (0:00:01.227) 0:52:44.322 ******
2026-02-17 06:39:47.523141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:39:47.523169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:39:47.523181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:39:47.523216 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.523228 | orchestrator |
2026-02-17 06:39:47.523239 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:39:47.523250 | orchestrator | Tuesday 17 February 2026 06:39:30 +0000 (0:00:01.515) 0:52:45.837 ******
2026-02-17 06:39:47.523261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:39:47.523272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:39:47.523283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:39:47.523294 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.523305 | orchestrator |
2026-02-17 06:39:47.523316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:39:47.523327 | orchestrator | Tuesday 17 February 2026 06:39:32 +0000 (0:00:01.440) 0:52:47.278 ******
2026-02-17 06:39:47.523338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:39:47.523349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:39:47.523369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:39:47.523380 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.523390 | orchestrator |
2026-02-17 06:39:47.523401 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:39:47.523412 | orchestrator | Tuesday 17 February 2026 06:39:33 +0000 (0:00:01.470) 0:52:48.748 ******
2026-02-17 06:39:47.523423 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:39:47.523434 | orchestrator |
2026-02-17 06:39:47.523445 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:39:47.523456 | orchestrator | Tuesday 17 February 2026 06:39:34 +0000 (0:00:01.179) 0:52:49.928 ******
2026-02-17 06:39:47.523467 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-17 06:39:47.523478 | orchestrator |
2026-02-17 06:39:47.523489 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-17 06:39:47.523500 | orchestrator | Tuesday 17 February 2026 06:39:36 +0000 (0:00:01.362) 0:52:51.291 ******
2026-02-17 06:39:47.523511 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:39:47.523528 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:39:47.523539 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:39:47.523550 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:39:47.523561 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:39:47.523572 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:39:47.523583 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:39:47.523594 | orchestrator |
2026-02-17 06:39:47.523604 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-17 06:39:47.523616 | orchestrator | Tuesday 17 February 2026 06:39:38 +0000 (0:00:02.210) 0:52:53.501 ******
2026-02-17 06:39:47.523626 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-17 06:39:47.523637 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-17 06:39:47.523648 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-17 06:39:47.523659 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:39:47.523670 |
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:39:47.523680 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-17 06:39:47.523691 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:39:47.523702 | orchestrator |
2026-02-17 06:39:47.523713 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-02-17 06:39:47.523724 | orchestrator | Tuesday 17 February 2026 06:39:41 +0000 (0:00:03.064) 0:52:56.566 ******
2026-02-17 06:39:47.523735 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.523745 | orchestrator |
2026-02-17 06:39:47.523756 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:39:47.523767 | orchestrator | Tuesday 17 February 2026 06:39:42 +0000 (0:00:01.139) 0:52:57.706 ******
2026-02-17 06:39:47.523778 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-17 06:39:47.523789 | orchestrator |
2026-02-17 06:39:47.523801 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 06:39:47.523812 | orchestrator | Tuesday 17 February 2026 06:39:43 +0000 (0:00:01.147) 0:52:58.853 ******
2026-02-17 06:39:47.523823 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-17 06:39:47.523833 | orchestrator |
2026-02-17 06:39:47.523845 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 06:39:47.523862 | orchestrator | Tuesday 17 February 2026 06:39:44 +0000 (0:00:01.142) 0:52:59.996 ******
2026-02-17 06:39:47.523873 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:39:47.523884 | orchestrator |
2026-02-17 06:39:47.523895 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 06:39:47.523906 | orchestrator | Tuesday 17 February 2026 06:39:45 +0000 (0:00:01.140) 0:53:01.137 ******
2026-02-17 06:39:47.523917 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:39:47.523927 | orchestrator |
2026-02-17 06:39:47.523939 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 06:39:47.523956 | orchestrator | Tuesday 17 February 2026 06:39:47 +0000 (0:00:01.642) 0:53:02.779 ******
2026-02-17 06:40:39.302833 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.302946 | orchestrator |
2026-02-17 06:40:39.302962 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 06:40:39.302975 | orchestrator | Tuesday 17 February 2026 06:39:49 +0000 (0:00:01.612) 0:53:04.392 ******
2026-02-17 06:40:39.302987 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.302998 | orchestrator |
2026-02-17 06:40:39.303009 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 06:40:39.303021 | orchestrator | Tuesday 17 February 2026 06:39:50 +0000 (0:00:01.572) 0:53:05.964 ******
2026-02-17 06:40:39.303032 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303044 | orchestrator |
2026-02-17 06:40:39.303055 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 06:40:39.303067 | orchestrator | Tuesday 17 February 2026 06:39:51 +0000 (0:00:01.118) 0:53:07.083 ******
2026-02-17 06:40:39.303077 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303088 | orchestrator |
2026-02-17 06:40:39.303100 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 06:40:39.303156 | orchestrator | Tuesday 17 February 2026 06:39:52 +0000 (0:00:01.120) 0:53:08.204 ******
2026-02-17 06:40:39.303167 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303178 | orchestrator |
2026-02-17 06:40:39.303189 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 06:40:39.303201 | orchestrator | Tuesday 17 February 2026 06:39:54 +0000 (0:00:01.109) 0:53:09.313 ******
2026-02-17 06:40:39.303212 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.303223 | orchestrator |
2026-02-17 06:40:39.303234 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 06:40:39.303245 | orchestrator | Tuesday 17 February 2026 06:39:56 +0000 (0:00:02.054) 0:53:11.367 ******
2026-02-17 06:40:39.303257 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.303269 | orchestrator |
2026-02-17 06:40:39.303280 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 06:40:39.303291 | orchestrator | Tuesday 17 February 2026 06:39:57 +0000 (0:00:01.530) 0:53:12.898 ******
2026-02-17 06:40:39.303302 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303314 | orchestrator |
2026-02-17 06:40:39.303325 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:40:39.303336 | orchestrator | Tuesday 17 February 2026 06:39:58 +0000 (0:00:01.178) 0:53:14.077 ******
2026-02-17 06:40:39.303347 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303358 | orchestrator |
2026-02-17 06:40:39.303370 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:40:39.303383 | orchestrator | Tuesday 17 February 2026 06:39:59 +0000 (0:00:01.114) 0:53:15.191 ******
2026-02-17 06:40:39.303396 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.303409 | orchestrator |
2026-02-17 06:40:39.303422 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17
06:40:39.303434 | orchestrator | Tuesday 17 February 2026 06:40:01 +0000 (0:00:01.195) 0:53:16.387 ******
2026-02-17 06:40:39.303446 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.303459 | orchestrator |
2026-02-17 06:40:39.303472 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:40:39.303509 | orchestrator | Tuesday 17 February 2026 06:40:02 +0000 (0:00:01.180) 0:53:17.567 ******
2026-02-17 06:40:39.303522 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.303534 | orchestrator |
2026-02-17 06:40:39.303547 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:40:39.303559 | orchestrator | Tuesday 17 February 2026 06:40:03 +0000 (0:00:01.194) 0:53:18.762 ******
2026-02-17 06:40:39.303571 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303584 | orchestrator |
2026-02-17 06:40:39.303595 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:40:39.303606 | orchestrator | Tuesday 17 February 2026 06:40:04 +0000 (0:00:01.138) 0:53:19.900 ******
2026-02-17 06:40:39.303616 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303627 | orchestrator |
2026-02-17 06:40:39.303638 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:40:39.303649 | orchestrator | Tuesday 17 February 2026 06:40:05 +0000 (0:00:01.177) 0:53:21.078 ******
2026-02-17 06:40:39.303660 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303671 | orchestrator |
2026-02-17 06:40:39.303682 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:40:39.303693 | orchestrator | Tuesday 17 February 2026 06:40:07 +0000 (0:00:01.229) 0:53:22.308 ******
2026-02-17 06:40:39.303703 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.303714 | orchestrator |
2026-02-17 06:40:39.303764 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:40:39.303776 | orchestrator | Tuesday 17 February 2026 06:40:08 +0000 (0:00:01.206) 0:53:23.514 ******
2026-02-17 06:40:39.303787 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.303798 | orchestrator |
2026-02-17 06:40:39.303809 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:40:39.303820 | orchestrator | Tuesday 17 February 2026 06:40:09 +0000 (0:00:01.144) 0:53:24.659 ******
2026-02-17 06:40:39.303831 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303842 | orchestrator |
2026-02-17 06:40:39.303853 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:40:39.303864 | orchestrator | Tuesday 17 February 2026 06:40:10 +0000 (0:00:01.316) 0:53:25.976 ******
2026-02-17 06:40:39.303875 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303886 | orchestrator |
2026-02-17 06:40:39.303897 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:40:39.303908 | orchestrator | Tuesday 17 February 2026 06:40:11 +0000 (0:00:01.195) 0:53:27.171 ******
2026-02-17 06:40:39.303918 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303929 | orchestrator |
2026-02-17 06:40:39.303940 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:40:39.303951 | orchestrator | Tuesday 17 February 2026 06:40:13 +0000 (0:00:01.124) 0:53:28.296 ******
2026-02-17 06:40:39.303962 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.303973 | orchestrator |
2026-02-17 06:40:39.303985 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:40:39.304013 | orchestrator | Tuesday 17 February 2026 06:40:14 +0000 (0:00:01.126) 0:53:29.422 ******
2026-02-17 06:40:39.304025 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304036 | orchestrator |
2026-02-17 06:40:39.304047 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:40:39.304058 | orchestrator | Tuesday 17 February 2026 06:40:15 +0000 (0:00:01.184) 0:53:30.607 ******
2026-02-17 06:40:39.304068 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304079 | orchestrator |
2026-02-17 06:40:39.304090 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:40:39.304130 | orchestrator | Tuesday 17 February 2026 06:40:16 +0000 (0:00:01.131) 0:53:31.738 ******
2026-02-17 06:40:39.304142 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304153 | orchestrator |
2026-02-17 06:40:39.304164 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:40:39.304185 | orchestrator | Tuesday 17 February 2026 06:40:17 +0000 (0:00:01.162) 0:53:32.901 ******
2026-02-17 06:40:39.304196 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304207 | orchestrator |
2026-02-17 06:40:39.304218 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:40:39.304229 | orchestrator | Tuesday 17 February 2026 06:40:18 +0000 (0:00:01.219) 0:53:34.120 ******
2026-02-17 06:40:39.304240 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304250 | orchestrator |
2026-02-17 06:40:39.304261 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:40:39.304272 | orchestrator | Tuesday 17 February 2026 06:40:19 +0000 (0:00:01.150) 0:53:35.271 ******
2026-02-17 06:40:39.304283 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304294 | orchestrator |
2026-02-17 06:40:39.304305 | orchestrator | TASK [ceph-common :
Include configure_memory_allocator.yml] ********************
2026-02-17 06:40:39.304316 | orchestrator | Tuesday 17 February 2026 06:40:21 +0000 (0:00:01.218) 0:53:36.489 ******
2026-02-17 06:40:39.304327 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304338 | orchestrator |
2026-02-17 06:40:39.304349 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:40:39.304360 | orchestrator | Tuesday 17 February 2026 06:40:22 +0000 (0:00:01.177) 0:53:37.667 ******
2026-02-17 06:40:39.304371 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304382 | orchestrator |
2026-02-17 06:40:39.304398 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:40:39.304409 | orchestrator | Tuesday 17 February 2026 06:40:23 +0000 (0:00:01.129) 0:53:38.797 ******
2026-02-17 06:40:39.304420 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.304431 | orchestrator |
2026-02-17 06:40:39.304442 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:40:39.304453 | orchestrator | Tuesday 17 February 2026 06:40:25 +0000 (0:00:02.038) 0:53:40.836 ******
2026-02-17 06:40:39.304463 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.304474 | orchestrator |
2026-02-17 06:40:39.304486 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:40:39.304497 | orchestrator | Tuesday 17 February 2026 06:40:27 +0000 (0:00:02.251) 0:53:43.087 ******
2026-02-17 06:40:39.304507 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-17 06:40:39.304519 | orchestrator |
2026-02-17 06:40:39.304530 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:40:39.304541 | orchestrator | Tuesday 17 February 2026 06:40:29 +0000 (0:00:01.254) 0:53:44.341 ******
2026-02-17 06:40:39.304552 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304563 | orchestrator |
2026-02-17 06:40:39.304574 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:40:39.304585 | orchestrator | Tuesday 17 February 2026 06:40:30 +0000 (0:00:01.129) 0:53:45.471 ******
2026-02-17 06:40:39.304596 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304607 | orchestrator |
2026-02-17 06:40:39.304618 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:40:39.304629 | orchestrator | Tuesday 17 February 2026 06:40:31 +0000 (0:00:01.161) 0:53:46.632 ******
2026-02-17 06:40:39.304640 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:40:39.304650 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:40:39.304661 | orchestrator |
2026-02-17 06:40:39.304672 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:40:39.304683 | orchestrator | Tuesday 17 February 2026 06:40:33 +0000 (0:00:01.818) 0:53:48.451 ******
2026-02-17 06:40:39.304694 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:40:39.304704 | orchestrator |
2026-02-17 06:40:39.304715 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:40:39.304732 | orchestrator | Tuesday 17 February 2026 06:40:34 +0000 (0:00:01.418) 0:53:49.870 ******
2026-02-17 06:40:39.304743 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304754 | orchestrator |
2026-02-17 06:40:39.304765 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:40:39.304776 | orchestrator | Tuesday 17 February 2026 06:40:35 +0000 (0:00:01.217) 0:53:51.087 ******
2026-02-17 06:40:39.304787 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304798 | orchestrator |
2026-02-17 06:40:39.304809 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:40:39.304820 | orchestrator | Tuesday 17 February 2026 06:40:36 +0000 (0:00:01.180) 0:53:52.268 ******
2026-02-17 06:40:39.304831 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:40:39.304842 | orchestrator |
2026-02-17 06:40:39.304853 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:40:39.304864 | orchestrator | Tuesday 17 February 2026 06:40:38 +0000 (0:00:01.172) 0:53:53.441 ******
2026-02-17 06:40:39.304875 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-17 06:40:39.304885 | orchestrator |
2026-02-17 06:40:39.304897 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 06:40:39.304915 | orchestrator | Tuesday 17 February 2026 06:40:39 +0000 (0:00:01.117) 0:53:54.559 ******
2026-02-17 06:41:26.436565 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:41:26.436718 | orchestrator |
2026-02-17 06:41:26.436750 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 06:41:26.436771 | orchestrator | Tuesday 17 February 2026 06:40:41 +0000 (0:00:01.761) 0:53:56.320 ******
2026-02-17 06:41:26.436792 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:41:26.436805 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:41:26.436816 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:41:26.436827 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:41:26.436840 | orchestrator |
2026-02-17 06:41:26.436851 | orchestrator | TASK [ceph-container-common
: Pulling node-exporter container image] *********** 2026-02-17 06:41:26.436862 | orchestrator | Tuesday 17 February 2026 06:40:42 +0000 (0:00:01.157) 0:53:57.478 ****** 2026-02-17 06:41:26.436873 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.436884 | orchestrator | 2026-02-17 06:41:26.436895 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-17 06:41:26.436906 | orchestrator | Tuesday 17 February 2026 06:40:43 +0000 (0:00:01.179) 0:53:58.658 ****** 2026-02-17 06:41:26.436917 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.436928 | orchestrator | 2026-02-17 06:41:26.436939 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-17 06:41:26.436950 | orchestrator | Tuesday 17 February 2026 06:40:44 +0000 (0:00:01.201) 0:53:59.860 ****** 2026-02-17 06:41:26.436961 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.436971 | orchestrator | 2026-02-17 06:41:26.436982 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-17 06:41:26.436993 | orchestrator | Tuesday 17 February 2026 06:40:45 +0000 (0:00:01.177) 0:54:01.038 ****** 2026-02-17 06:41:26.437004 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437015 | orchestrator | 2026-02-17 06:41:26.437026 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-17 06:41:26.437093 | orchestrator | Tuesday 17 February 2026 06:40:46 +0000 (0:00:01.162) 0:54:02.201 ****** 2026-02-17 06:41:26.437114 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437133 | orchestrator | 2026-02-17 06:41:26.437176 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-17 06:41:26.437190 | orchestrator | Tuesday 17 February 2026 06:40:48 +0000 (0:00:01.212) 0:54:03.413 ****** 2026-02-17 06:41:26.437205 | orchestrator | 
ok: [testbed-node-3] 2026-02-17 06:41:26.437218 | orchestrator | 2026-02-17 06:41:26.437253 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-17 06:41:26.437265 | orchestrator | Tuesday 17 February 2026 06:40:50 +0000 (0:00:02.448) 0:54:05.862 ****** 2026-02-17 06:41:26.437278 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:41:26.437291 | orchestrator | 2026-02-17 06:41:26.437303 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-17 06:41:26.437315 | orchestrator | Tuesday 17 February 2026 06:40:51 +0000 (0:00:01.234) 0:54:07.096 ****** 2026-02-17 06:41:26.437328 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-17 06:41:26.437340 | orchestrator | 2026-02-17 06:41:26.437352 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-17 06:41:26.437364 | orchestrator | Tuesday 17 February 2026 06:40:52 +0000 (0:00:01.134) 0:54:08.231 ****** 2026-02-17 06:41:26.437377 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437389 | orchestrator | 2026-02-17 06:41:26.437401 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-17 06:41:26.437413 | orchestrator | Tuesday 17 February 2026 06:40:54 +0000 (0:00:01.181) 0:54:09.413 ****** 2026-02-17 06:41:26.437432 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437450 | orchestrator | 2026-02-17 06:41:26.437468 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-17 06:41:26.437486 | orchestrator | Tuesday 17 February 2026 06:40:55 +0000 (0:00:01.157) 0:54:10.570 ****** 2026-02-17 06:41:26.437504 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437524 | orchestrator | 2026-02-17 06:41:26.437537 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-02-17 06:41:26.437547 | orchestrator | Tuesday 17 February 2026 06:40:56 +0000 (0:00:01.209) 0:54:11.779 ****** 2026-02-17 06:41:26.437558 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437569 | orchestrator | 2026-02-17 06:41:26.437580 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-17 06:41:26.437591 | orchestrator | Tuesday 17 February 2026 06:40:57 +0000 (0:00:01.238) 0:54:13.017 ****** 2026-02-17 06:41:26.437603 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437627 | orchestrator | 2026-02-17 06:41:26.437654 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-17 06:41:26.437671 | orchestrator | Tuesday 17 February 2026 06:40:58 +0000 (0:00:01.209) 0:54:14.227 ****** 2026-02-17 06:41:26.437688 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437706 | orchestrator | 2026-02-17 06:41:26.437725 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-17 06:41:26.437743 | orchestrator | Tuesday 17 February 2026 06:41:00 +0000 (0:00:01.147) 0:54:15.374 ****** 2026-02-17 06:41:26.437762 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437780 | orchestrator | 2026-02-17 06:41:26.437798 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-17 06:41:26.437815 | orchestrator | Tuesday 17 February 2026 06:41:01 +0000 (0:00:01.163) 0:54:16.537 ****** 2026-02-17 06:41:26.437834 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.437848 | orchestrator | 2026-02-17 06:41:26.437859 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-17 06:41:26.437870 | orchestrator | Tuesday 17 February 2026 06:41:02 +0000 (0:00:01.181) 0:54:17.719 ****** 2026-02-17 06:41:26.437881 | orchestrator | ok: [testbed-node-3] 
2026-02-17 06:41:26.437892 | orchestrator | 2026-02-17 06:41:26.437903 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-17 06:41:26.437934 | orchestrator | Tuesday 17 February 2026 06:41:03 +0000 (0:00:01.162) 0:54:18.881 ****** 2026-02-17 06:41:26.437946 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-17 06:41:26.437957 | orchestrator | 2026-02-17 06:41:26.437968 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-17 06:41:26.437979 | orchestrator | Tuesday 17 February 2026 06:41:04 +0000 (0:00:01.155) 0:54:20.037 ****** 2026-02-17 06:41:26.438004 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-17 06:41:26.438123 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-17 06:41:26.438143 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-17 06:41:26.438154 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-17 06:41:26.438165 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-17 06:41:26.438185 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-17 06:41:26.438196 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-17 06:41:26.438206 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-17 06:41:26.438217 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-17 06:41:26.438228 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-17 06:41:26.438239 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-17 06:41:26.438250 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-17 06:41:26.438261 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-17 06:41:26.438272 | 
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-17 06:41:26.438283 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-17 06:41:26.438294 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-17 06:41:26.438304 | orchestrator | 2026-02-17 06:41:26.438315 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-17 06:41:26.438327 | orchestrator | Tuesday 17 February 2026 06:41:11 +0000 (0:00:06.531) 0:54:26.569 ****** 2026-02-17 06:41:26.438362 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-17 06:41:26.438387 | orchestrator | 2026-02-17 06:41:26.438405 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-17 06:41:26.438422 | orchestrator | Tuesday 17 February 2026 06:41:12 +0000 (0:00:01.228) 0:54:27.797 ****** 2026-02-17 06:41:26.438440 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:41:26.438459 | orchestrator | 2026-02-17 06:41:26.438477 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-17 06:41:26.438496 | orchestrator | Tuesday 17 February 2026 06:41:14 +0000 (0:00:01.483) 0:54:29.280 ****** 2026-02-17 06:41:26.438515 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:41:26.438529 | orchestrator | 2026-02-17 06:41:26.438540 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-17 06:41:26.438551 | orchestrator | Tuesday 17 February 2026 06:41:16 +0000 (0:00:02.012) 0:54:31.293 ****** 2026-02-17 06:41:26.438562 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.438573 | orchestrator | 
2026-02-17 06:41:26.438584 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-17 06:41:26.438594 | orchestrator | Tuesday 17 February 2026 06:41:17 +0000 (0:00:01.143) 0:54:32.437 ****** 2026-02-17 06:41:26.438605 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.438623 | orchestrator | 2026-02-17 06:41:26.438644 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-17 06:41:26.438670 | orchestrator | Tuesday 17 February 2026 06:41:18 +0000 (0:00:01.137) 0:54:33.574 ****** 2026-02-17 06:41:26.438687 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.438704 | orchestrator | 2026-02-17 06:41:26.438721 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-17 06:41:26.438738 | orchestrator | Tuesday 17 February 2026 06:41:19 +0000 (0:00:01.188) 0:54:34.763 ****** 2026-02-17 06:41:26.438756 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.438774 | orchestrator | 2026-02-17 06:41:26.438792 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-17 06:41:26.438819 | orchestrator | Tuesday 17 February 2026 06:41:20 +0000 (0:00:01.261) 0:54:36.025 ****** 2026-02-17 06:41:26.438830 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.438841 | orchestrator | 2026-02-17 06:41:26.438852 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-17 06:41:26.438863 | orchestrator | Tuesday 17 February 2026 06:41:21 +0000 (0:00:01.147) 0:54:37.172 ****** 2026-02-17 06:41:26.438874 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.438885 | orchestrator | 2026-02-17 06:41:26.438896 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-17 06:41:26.438907 | 
orchestrator | Tuesday 17 February 2026 06:41:23 +0000 (0:00:01.143) 0:54:38.316 ****** 2026-02-17 06:41:26.438918 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.438929 | orchestrator | 2026-02-17 06:41:26.438940 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-17 06:41:26.438951 | orchestrator | Tuesday 17 February 2026 06:41:24 +0000 (0:00:01.123) 0:54:39.439 ****** 2026-02-17 06:41:26.438962 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.438972 | orchestrator | 2026-02-17 06:41:26.438983 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-17 06:41:26.438994 | orchestrator | Tuesday 17 February 2026 06:41:25 +0000 (0:00:01.132) 0:54:40.572 ****** 2026-02-17 06:41:26.439006 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:41:26.439017 | orchestrator | 2026-02-17 06:41:26.439070 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-17 06:42:22.216449 | orchestrator | Tuesday 17 February 2026 06:41:26 +0000 (0:00:01.117) 0:54:41.690 ****** 2026-02-17 06:42:22.216592 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.216612 | orchestrator | 2026-02-17 06:42:22.216626 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-17 06:42:22.216638 | orchestrator | Tuesday 17 February 2026 06:41:27 +0000 (0:00:01.135) 0:54:42.826 ****** 2026-02-17 06:42:22.216649 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.216661 | orchestrator | 2026-02-17 06:42:22.216672 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-17 06:42:22.216684 | orchestrator | Tuesday 17 February 2026 06:41:28 +0000 (0:00:01.198) 0:54:44.024 ****** 2026-02-17 06:42:22.216695 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] 2026-02-17 06:42:22.216706 | orchestrator | 2026-02-17 06:42:22.216717 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-17 06:42:22.216728 | orchestrator | Tuesday 17 February 2026 06:41:33 +0000 (0:00:04.279) 0:54:48.304 ****** 2026-02-17 06:42:22.216740 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:42:22.216752 | orchestrator | 2026-02-17 06:42:22.216763 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-17 06:42:22.216775 | orchestrator | Tuesday 17 February 2026 06:41:34 +0000 (0:00:01.246) 0:54:49.550 ****** 2026-02-17 06:42:22.216788 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-17 06:42:22.216822 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-17 06:42:22.216836 | orchestrator | 2026-02-17 06:42:22.216847 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-17 06:42:22.216882 | orchestrator | Tuesday 17 February 2026 06:41:38 +0000 (0:00:04.655) 0:54:54.206 ****** 2026-02-17 06:42:22.216893 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.216904 | orchestrator | 2026-02-17 06:42:22.216915 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-02-17 06:42:22.216926 | orchestrator | Tuesday 17 February 2026 06:41:40 +0000 (0:00:01.163) 0:54:55.370 ****** 2026-02-17 06:42:22.216937 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.216947 | orchestrator | 2026-02-17 06:42:22.216994 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:42:22.217008 | orchestrator | Tuesday 17 February 2026 06:41:41 +0000 (0:00:01.165) 0:54:56.535 ****** 2026-02-17 06:42:22.217020 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.217033 | orchestrator | 2026-02-17 06:42:22.217046 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:42:22.217059 | orchestrator | Tuesday 17 February 2026 06:41:42 +0000 (0:00:01.174) 0:54:57.710 ****** 2026-02-17 06:42:22.217071 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.217084 | orchestrator | 2026-02-17 06:42:22.217096 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:42:22.217108 | orchestrator | Tuesday 17 February 2026 06:41:43 +0000 (0:00:01.174) 0:54:58.884 ****** 2026-02-17 06:42:22.217120 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.217132 | orchestrator | 2026-02-17 06:42:22.217144 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:42:22.217156 | orchestrator | Tuesday 17 February 2026 06:41:44 +0000 (0:00:01.164) 0:55:00.049 ****** 2026-02-17 06:42:22.217169 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.217182 | orchestrator | 2026-02-17 06:42:22.217195 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:42:22.217207 | orchestrator | Tuesday 17 February 2026 06:41:46 +0000 (0:00:01.254) 0:55:01.304 
****** 2026-02-17 06:42:22.217220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:42:22.217232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 06:42:22.217245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 06:42:22.217257 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.217269 | orchestrator | 2026-02-17 06:42:22.217282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:42:22.217294 | orchestrator | Tuesday 17 February 2026 06:41:47 +0000 (0:00:01.460) 0:55:02.764 ****** 2026-02-17 06:42:22.217307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:42:22.217319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 06:42:22.217332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 06:42:22.217344 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.217355 | orchestrator | 2026-02-17 06:42:22.217366 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:42:22.217377 | orchestrator | Tuesday 17 February 2026 06:41:48 +0000 (0:00:01.459) 0:55:04.224 ****** 2026-02-17 06:42:22.217387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:42:22.217398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 06:42:22.217409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 06:42:22.217437 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.217448 | orchestrator | 2026-02-17 06:42:22.217459 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:42:22.217470 | orchestrator | Tuesday 17 February 2026 06:41:50 +0000 (0:00:01.516) 0:55:05.741 ****** 2026-02-17 06:42:22.217481 | orchestrator | ok: 
[testbed-node-3] 2026-02-17 06:42:22.217491 | orchestrator | 2026-02-17 06:42:22.217502 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:42:22.217513 | orchestrator | Tuesday 17 February 2026 06:41:51 +0000 (0:00:01.209) 0:55:06.950 ****** 2026-02-17 06:42:22.217532 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-17 06:42:22.217543 | orchestrator | 2026-02-17 06:42:22.217553 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-17 06:42:22.217564 | orchestrator | Tuesday 17 February 2026 06:41:53 +0000 (0:00:01.865) 0:55:08.816 ****** 2026-02-17 06:42:22.217615 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.217642 | orchestrator | 2026-02-17 06:42:22.217665 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-17 06:42:22.217676 | orchestrator | Tuesday 17 February 2026 06:41:55 +0000 (0:00:01.774) 0:55:10.590 ****** 2026-02-17 06:42:22.217687 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.217697 | orchestrator | 2026-02-17 06:42:22.217708 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-17 06:42:22.217719 | orchestrator | Tuesday 17 February 2026 06:41:56 +0000 (0:00:01.169) 0:55:11.760 ****** 2026-02-17 06:42:22.217730 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3 2026-02-17 06:42:22.217741 | orchestrator | 2026-02-17 06:42:22.217752 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-17 06:42:22.217763 | orchestrator | Tuesday 17 February 2026 06:41:58 +0000 (0:00:01.525) 0:55:13.285 ****** 2026-02-17 06:42:22.217773 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-17 06:42:22.217784 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 
2026-02-17 06:42:22.217795 | orchestrator | 2026-02-17 06:42:22.217806 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-17 06:42:22.217823 | orchestrator | Tuesday 17 February 2026 06:41:59 +0000 (0:00:01.806) 0:55:15.092 ****** 2026-02-17 06:42:22.217834 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:42:22.217845 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 06:42:22.217856 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 06:42:22.217867 | orchestrator | 2026-02-17 06:42:22.217878 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-17 06:42:22.217889 | orchestrator | Tuesday 17 February 2026 06:42:02 +0000 (0:00:03.148) 0:55:18.241 ****** 2026-02-17 06:42:22.217900 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-17 06:42:22.217911 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-17 06:42:22.217921 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.217932 | orchestrator | 2026-02-17 06:42:22.217943 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-17 06:42:22.217981 | orchestrator | Tuesday 17 February 2026 06:42:04 +0000 (0:00:01.955) 0:55:20.197 ****** 2026-02-17 06:42:22.218001 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.218088 | orchestrator | 2026-02-17 06:42:22.218103 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-17 06:42:22.218114 | orchestrator | Tuesday 17 February 2026 06:42:06 +0000 (0:00:01.563) 0:55:21.760 ****** 2026-02-17 06:42:22.218125 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:22.218136 | orchestrator | 2026-02-17 06:42:22.218147 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-17 
06:42:22.218157 | orchestrator | Tuesday 17 February 2026 06:42:07 +0000 (0:00:01.192) 0:55:22.952 ****** 2026-02-17 06:42:22.218168 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3 2026-02-17 06:42:22.218180 | orchestrator | 2026-02-17 06:42:22.218191 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-17 06:42:22.218201 | orchestrator | Tuesday 17 February 2026 06:42:09 +0000 (0:00:01.480) 0:55:24.433 ****** 2026-02-17 06:42:22.218212 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3 2026-02-17 06:42:22.218223 | orchestrator | 2026-02-17 06:42:22.218233 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-17 06:42:22.218254 | orchestrator | Tuesday 17 February 2026 06:42:10 +0000 (0:00:01.585) 0:55:26.018 ****** 2026-02-17 06:42:22.218265 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.218275 | orchestrator | 2026-02-17 06:42:22.218286 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-17 06:42:22.218297 | orchestrator | Tuesday 17 February 2026 06:42:12 +0000 (0:00:02.058) 0:55:28.076 ****** 2026-02-17 06:42:22.218308 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.218319 | orchestrator | 2026-02-17 06:42:22.218330 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-17 06:42:22.218341 | orchestrator | Tuesday 17 February 2026 06:42:14 +0000 (0:00:02.030) 0:55:30.107 ****** 2026-02-17 06:42:22.218352 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.218362 | orchestrator | 2026-02-17 06:42:22.218373 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-17 06:42:22.218384 | orchestrator | Tuesday 17 February 2026 06:42:17 +0000 (0:00:02.313) 0:55:32.421 ****** 2026-02-17 06:42:22.218395 | 
orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.218405 | orchestrator | 2026-02-17 06:42:22.218416 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-17 06:42:22.218427 | orchestrator | Tuesday 17 February 2026 06:42:19 +0000 (0:00:02.274) 0:55:34.696 ****** 2026-02-17 06:42:22.218438 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:22.218449 | orchestrator | 2026-02-17 06:42:22.218460 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-02-17 06:42:22.218471 | orchestrator | Tuesday 17 February 2026 06:42:21 +0000 (0:00:01.618) 0:55:36.315 ****** 2026-02-17 06:42:22.218493 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:42:56.931300 | orchestrator | 2026-02-17 06:42:56.931444 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-02-17 06:42:56.931470 | orchestrator | Tuesday 17 February 2026 06:42:22 +0000 (0:00:01.157) 0:55:37.473 ****** 2026-02-17 06:42:56.931490 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:42:56.931510 | orchestrator | 2026-02-17 06:42:56.931528 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-02-17 06:42:56.931548 | orchestrator | 2026-02-17 06:42:56.931567 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:42:56.931586 | orchestrator | Tuesday 17 February 2026 06:42:32 +0000 (0:00:09.854) 0:55:47.327 ****** 2026-02-17 06:42:56.931605 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-5 2026-02-17 06:42:56.931624 | orchestrator | 2026-02-17 06:42:56.931642 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:42:56.931661 | orchestrator | Tuesday 17 February 2026 06:42:33 +0000 (0:00:01.573) 0:55:48.900 ****** 2026-02-17 06:42:56.931680 | 
orchestrator | ok: [testbed-node-4] 2026-02-17 06:42:56.931698 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:42:56.931717 | orchestrator | 2026-02-17 06:42:56.931736 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:42:56.931754 | orchestrator | Tuesday 17 February 2026 06:42:35 +0000 (0:00:01.592) 0:55:50.493 ****** 2026-02-17 06:42:56.931772 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:42:56.931791 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:42:56.931810 | orchestrator | 2026-02-17 06:42:56.931830 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:42:56.931849 | orchestrator | Tuesday 17 February 2026 06:42:36 +0000 (0:00:01.308) 0:55:51.802 ****** 2026-02-17 06:42:56.931868 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:42:56.931888 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:42:56.931937 | orchestrator | 2026-02-17 06:42:56.931958 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:42:56.931977 | orchestrator | Tuesday 17 February 2026 06:42:38 +0000 (0:00:01.649) 0:55:53.451 ****** 2026-02-17 06:42:56.931996 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:42:56.932016 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:42:56.932035 | orchestrator | 2026-02-17 06:42:56.932104 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:42:56.932126 | orchestrator | Tuesday 17 February 2026 06:42:39 +0000 (0:00:01.296) 0:55:54.748 ****** 2026-02-17 06:42:56.932145 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:42:56.932165 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:42:56.932184 | orchestrator | 2026-02-17 06:42:56.932204 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:42:56.932223 | orchestrator | Tuesday 17 February 
2026 06:42:40 +0000 (0:00:01.273) 0:55:56.022 ****** 2026-02-17 06:42:56.932242 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:42:56.932261 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:42:56.932281 | orchestrator | 2026-02-17 06:42:56.932300 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:42:56.932318 | orchestrator | Tuesday 17 February 2026 06:42:42 +0000 (0:00:01.299) 0:55:57.321 ****** 2026-02-17 06:42:56.932336 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:42:56.932355 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:42:56.932375 | orchestrator | 2026-02-17 06:42:56.932396 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:42:56.932415 | orchestrator | Tuesday 17 February 2026 06:42:43 +0000 (0:00:01.250) 0:55:58.572 ****** 2026-02-17 06:42:56.932434 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:42:56.932452 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:42:56.932471 | orchestrator | 2026-02-17 06:42:56.932491 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:42:56.932512 | orchestrator | Tuesday 17 February 2026 06:42:44 +0000 (0:00:01.283) 0:55:59.855 ****** 2026-02-17 06:42:56.932532 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:42:56.932552 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:42:56.932572 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:42:56.932592 | orchestrator | 2026-02-17 06:42:56.932612 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:42:56.932632 | orchestrator | Tuesday 17 February 2026 06:42:46 +0000 (0:00:01.705) 0:56:01.561 ****** 2026-02-17 06:42:56.932652 
| orchestrator | ok: [testbed-node-4] 2026-02-17 06:42:56.932673 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:42:56.932693 | orchestrator | 2026-02-17 06:42:56.932713 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:42:56.932733 | orchestrator | Tuesday 17 February 2026 06:42:47 +0000 (0:00:01.445) 0:56:03.006 ****** 2026-02-17 06:42:56.932754 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:42:56.932774 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:42:56.932795 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:42:56.932815 | orchestrator | 2026-02-17 06:42:56.932835 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:42:56.932855 | orchestrator | Tuesday 17 February 2026 06:42:51 +0000 (0:00:03.303) 0:56:06.310 ****** 2026-02-17 06:42:56.932875 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-17 06:42:56.932895 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-17 06:42:56.932940 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-17 06:42:56.932960 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:42:56.932981 | orchestrator | 2026-02-17 06:42:56.933002 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:42:56.933022 | orchestrator | Tuesday 17 February 2026 06:42:52 +0000 (0:00:01.458) 0:56:07.769 ****** 2026-02-17 06:42:56.933072 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:42:56.933117 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:42:56.933139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:42:56.933160 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:42:56.933183 | orchestrator | 2026-02-17 06:42:56.933203 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:42:56.933224 | orchestrator | Tuesday 17 February 2026 06:42:54 +0000 (0:00:02.000) 0:56:09.770 ****** 2026-02-17 06:42:56.933247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:42:56.933282 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:42:56.933304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:42:56.933324 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:42:56.933343 | orchestrator | 2026-02-17 06:42:56.933364 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:42:56.933385 | orchestrator | Tuesday 17 February 2026 06:42:55 +0000 (0:00:01.203) 0:56:10.974 ****** 2026-02-17 06:42:56.933407 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:42:48.281843', 'end': '2026-02-17 06:42:48.337543', 'delta': '0:00:00.055700', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:42:56.933433 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:42:48.876009', 'end': '2026-02-17 06:42:48.927242', 'delta': '0:00:00.051233', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:42:56.933479 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:42:49.776179', 'end': '2026-02-17 06:42:49.820793', 'delta': '0:00:00.044614', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:43:16.709263 | orchestrator | 2026-02-17 06:43:16.709437 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:43:16.709466 | orchestrator | Tuesday 17 February 2026 06:42:56 +0000 (0:00:01.208) 0:56:12.182 ****** 2026-02-17 06:43:16.709486 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:16.709503 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:16.709520 | orchestrator | 2026-02-17 06:43:16.709538 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:43:16.709556 | orchestrator | Tuesday 17 February 2026 06:42:58 +0000 (0:00:01.829) 0:56:14.011 ****** 2026-02-17 06:43:16.709587 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:16.709607 | orchestrator | 2026-02-17 06:43:16.709626 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-17 06:43:16.709643 | orchestrator | Tuesday 17 
February 2026 06:42:59 +0000 (0:00:01.225) 0:56:15.237 ****** 2026-02-17 06:43:16.709663 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:16.709680 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:16.709699 | orchestrator | 2026-02-17 06:43:16.709716 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:43:16.709734 | orchestrator | Tuesday 17 February 2026 06:43:01 +0000 (0:00:01.282) 0:56:16.519 ****** 2026-02-17 06:43:16.709753 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:43:16.709770 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:43:16.709789 | orchestrator | 2026-02-17 06:43:16.709830 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:43:16.709851 | orchestrator | Tuesday 17 February 2026 06:43:03 +0000 (0:00:02.195) 0:56:18.715 ****** 2026-02-17 06:43:16.709870 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:16.709940 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:16.709960 | orchestrator | 2026-02-17 06:43:16.709979 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:43:16.709998 | orchestrator | Tuesday 17 February 2026 06:43:04 +0000 (0:00:01.308) 0:56:20.024 ****** 2026-02-17 06:43:16.710089 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:16.710111 | orchestrator | 2026-02-17 06:43:16.710130 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:43:16.710150 | orchestrator | Tuesday 17 February 2026 06:43:05 +0000 (0:00:01.161) 0:56:21.186 ****** 2026-02-17 06:43:16.710168 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:16.710200 | orchestrator | 2026-02-17 06:43:16.710219 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:43:16.710237 
| orchestrator | Tuesday 17 February 2026 06:43:07 +0000 (0:00:01.226) 0:56:22.413 ****** 2026-02-17 06:43:16.710259 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:16.710272 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:16.710283 | orchestrator | 2026-02-17 06:43:16.710294 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:43:16.710305 | orchestrator | Tuesday 17 February 2026 06:43:08 +0000 (0:00:01.310) 0:56:23.723 ****** 2026-02-17 06:43:16.710317 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:16.710357 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:16.710369 | orchestrator | 2026-02-17 06:43:16.710380 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:43:16.710391 | orchestrator | Tuesday 17 February 2026 06:43:09 +0000 (0:00:01.539) 0:56:25.263 ****** 2026-02-17 06:43:16.710401 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:16.710412 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:16.710423 | orchestrator | 2026-02-17 06:43:16.710434 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:43:16.710445 | orchestrator | Tuesday 17 February 2026 06:43:11 +0000 (0:00:01.339) 0:56:26.603 ****** 2026-02-17 06:43:16.710456 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:16.710466 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:16.710477 | orchestrator | 2026-02-17 06:43:16.710488 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:43:16.710499 | orchestrator | Tuesday 17 February 2026 06:43:12 +0000 (0:00:01.314) 0:56:27.918 ****** 2026-02-17 06:43:16.710510 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:16.710521 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:16.710531 | orchestrator | 2026-02-17 06:43:16.710542 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:43:16.710553 | orchestrator | Tuesday 17 February 2026 06:43:13 +0000 (0:00:01.245) 0:56:29.164 ****** 2026-02-17 06:43:16.710564 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:16.710574 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:16.710585 | orchestrator | 2026-02-17 06:43:16.710596 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:43:16.710607 | orchestrator | Tuesday 17 February 2026 06:43:15 +0000 (0:00:01.270) 0:56:30.434 ****** 2026-02-17 06:43:16.710618 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:16.710629 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:16.710640 | orchestrator | 2026-02-17 06:43:16.710651 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:43:16.710662 | orchestrator | Tuesday 17 February 2026 06:43:16 +0000 (0:00:01.305) 0:56:31.740 ****** 2026-02-17 06:43:16.710675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.710715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'uuids': ['dab48e76-bd26-40e2-b056-8f58a903c67b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL']}})  2026-02-17 06:43:16.710739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd9c05b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:43:16.710753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b']}})  2026-02-17 06:43:16.710775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.710787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.710800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:43:16.710812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.710832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08', 'dm-uuid-CRYPT-LUKS2-40a19dfb08344771a8e6cfe7009b1e1d-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:43:16.814818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.814993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'uuids': ['40a19dfb-0834-4771-a8e6-cfe7009b1e1d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08']}})  2026-02-17 06:43:16.815038 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0']}})  2026-02-17 06:43:16.815053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.815066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.815078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 
'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'uuids': ['4833064e-8ca1-479d-a0c0-581ea0d1065c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7']}})  2026-02-17 06:43:16.815120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '95350bd6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:43:16.815144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b093f3ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:43:16.815157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.815169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee']}})  2026-02-17 06:43:16.815180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:16.815200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL', 'dm-uuid-CRYPT-LUKS2-dab48e76bd2640e2b0568f58a903c67b-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:43:18.107492 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:18.107505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV', 'dm-uuid-CRYPT-LUKS2-f004f31e7c734e098d3470dc55158438-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'uuids': ['f004f31e-7c73-4e09-8d34-70dc55158438'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV']}})  2026-02-17 06:43:18.107574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841']}})  2026-02-17 06:43:18.107600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37d8f58a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:43:18.107630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:43:18.107662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7', 'dm-uuid-CRYPT-LUKS2-4833064e8ca1479da0c0581ea0d1065c-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:43:18.371195 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:18.371282 | orchestrator | 2026-02-17 06:43:18.371293 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:43:18.371303 | orchestrator | Tuesday 17 February 2026 06:43:18 +0000 (0:00:01.622) 0:56:33.362 ****** 2026-02-17 06:43:18.371329 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371355 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'uuids': ['dab48e76-bd26-40e2-b056-8f58a903c67b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371374 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd9c05b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371385 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371427 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371450 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08', 'dm-uuid-CRYPT-LUKS2-40a19dfb08344771a8e6cfe7009b1e1d-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.371496 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'uuids': ['40a19dfb-0834-4771-a8e6-cfe7009b1e1d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446211 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'uuids': ['4833064e-8ca1-479d-a0c0-581ea0d1065c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446321 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446348 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b093f3ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '95350bd6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446391 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446405 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:18.446425 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.748766 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.748866 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL', 'dm-uuid-CRYPT-LUKS2-dab48e76bd2640e2b0568f58a903c67b-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.748927 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.748941 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:19.748956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.748991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.749035 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV', 'dm-uuid-CRYPT-LUKS2-f004f31e7c734e098d3470dc55158438-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.749048 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.749061 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'uuids': ['f004f31e-7c73-4e09-8d34-70dc55158438'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.749074 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.749098 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:19.749127 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37d8f58a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 
'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:47.537976 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:47.538140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:47.538158 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7', 'dm-uuid-CRYPT-LUKS2-4833064e8ca1479da0c0581ea0d1065c-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:43:47.538170 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:47.538182 | orchestrator | 2026-02-17 06:43:47.538194 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 06:43:47.538205 | orchestrator | Tuesday 17 February 2026 06:43:19 +0000 (0:00:01.644) 0:56:35.007 ****** 2026-02-17 06:43:47.538214 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:47.538225 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:47.538235 | orchestrator | 2026-02-17 06:43:47.538246 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 06:43:47.538256 | orchestrator | Tuesday 17 February 2026 06:43:21 +0000 (0:00:01.634) 0:56:36.641 ****** 2026-02-17 06:43:47.538266 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:47.538287 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:47.538298 | orchestrator | 2026-02-17 06:43:47.538308 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:43:47.538317 | orchestrator | Tuesday 17 February 2026 06:43:22 +0000 (0:00:01.304) 0:56:37.946 ****** 2026-02-17 06:43:47.538327 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:47.538337 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:47.538347 | orchestrator | 2026-02-17 06:43:47.538357 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:43:47.538366 | orchestrator | Tuesday 17 February 2026 06:43:24 +0000 (0:00:01.630) 0:56:39.576 ****** 2026-02-17 06:43:47.538376 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.538386 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:47.538396 | orchestrator | 2026-02-17 06:43:47.538406 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-02-17 06:43:47.538415 | orchestrator | Tuesday 17 February 2026 06:43:25 +0000 (0:00:01.313) 0:56:40.890 ****** 2026-02-17 06:43:47.538425 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.538435 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:47.538445 | orchestrator | 2026-02-17 06:43:47.538455 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:43:47.538465 | orchestrator | Tuesday 17 February 2026 06:43:26 +0000 (0:00:01.352) 0:56:42.243 ****** 2026-02-17 06:43:47.538474 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.538484 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:47.538494 | orchestrator | 2026-02-17 06:43:47.538504 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 06:43:47.538514 | orchestrator | Tuesday 17 February 2026 06:43:28 +0000 (0:00:01.652) 0:56:43.895 ****** 2026-02-17 06:43:47.538524 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-17 06:43:47.538543 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-17 06:43:47.538554 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-17 06:43:47.538565 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-17 06:43:47.538576 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-17 06:43:47.538587 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-17 06:43:47.538598 | orchestrator | 2026-02-17 06:43:47.538609 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 06:43:47.538620 | orchestrator | Tuesday 17 February 2026 06:43:30 +0000 (0:00:01.878) 0:56:45.774 ****** 2026-02-17 06:43:47.538648 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-17 06:43:47.538660 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-17 06:43:47.538671 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-17 06:43:47.538682 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.538692 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-17 06:43:47.538703 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-17 06:43:47.538714 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-17 06:43:47.538725 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:47.538737 | orchestrator | 2026-02-17 06:43:47.538748 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 06:43:47.538759 | orchestrator | Tuesday 17 February 2026 06:43:31 +0000 (0:00:01.365) 0:56:47.139 ****** 2026-02-17 06:43:47.538771 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-5 2026-02-17 06:43:47.538783 | orchestrator | 2026-02-17 06:43:47.538794 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:43:47.538807 | orchestrator | Tuesday 17 February 2026 06:43:33 +0000 (0:00:01.248) 0:56:48.388 ****** 2026-02-17 06:43:47.538818 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.538829 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:47.538840 | orchestrator | 2026-02-17 06:43:47.538875 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:43:47.538886 | orchestrator | Tuesday 17 February 2026 06:43:34 +0000 (0:00:01.278) 0:56:49.666 ****** 2026-02-17 06:43:47.538897 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.538908 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:47.538940 | orchestrator | 2026-02-17 06:43:47.538950 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:43:47.538960 | orchestrator | Tuesday 17 February 2026 06:43:35 +0000 (0:00:01.340) 0:56:51.007 ****** 2026-02-17 06:43:47.538970 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.538979 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:43:47.538989 | orchestrator | 2026-02-17 06:43:47.538998 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:43:47.539008 | orchestrator | Tuesday 17 February 2026 06:43:36 +0000 (0:00:01.237) 0:56:52.244 ****** 2026-02-17 06:43:47.539018 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:47.539027 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:47.539037 | orchestrator | 2026-02-17 06:43:47.539046 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:43:47.539056 | orchestrator | Tuesday 17 February 2026 06:43:38 +0000 (0:00:01.378) 0:56:53.622 ****** 2026-02-17 06:43:47.539066 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:43:47.539075 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:43:47.539085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:43:47.539095 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.539104 | orchestrator | 2026-02-17 06:43:47.539114 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:43:47.539130 | orchestrator | Tuesday 17 February 2026 06:43:39 +0000 (0:00:01.440) 0:56:55.063 ****** 2026-02-17 06:43:47.539140 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:43:47.539149 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:43:47.539164 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  
2026-02-17 06:43:47.539174 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.539184 | orchestrator | 2026-02-17 06:43:47.539193 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:43:47.539203 | orchestrator | Tuesday 17 February 2026 06:43:41 +0000 (0:00:01.397) 0:56:56.461 ****** 2026-02-17 06:43:47.539213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:43:47.539223 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:43:47.539232 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:43:47.539242 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:43:47.539251 | orchestrator | 2026-02-17 06:43:47.539261 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:43:47.539270 | orchestrator | Tuesday 17 February 2026 06:43:42 +0000 (0:00:01.400) 0:56:57.861 ****** 2026-02-17 06:43:47.539280 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:43:47.539289 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:43:47.539299 | orchestrator | 2026-02-17 06:43:47.539308 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:43:47.539318 | orchestrator | Tuesday 17 February 2026 06:43:43 +0000 (0:00:01.303) 0:56:59.165 ****** 2026-02-17 06:43:47.539328 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-17 06:43:47.539337 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-17 06:43:47.539347 | orchestrator | 2026-02-17 06:43:47.539357 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 06:43:47.539366 | orchestrator | Tuesday 17 February 2026 06:43:45 +0000 (0:00:01.457) 0:57:00.622 ****** 2026-02-17 06:43:47.539376 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 
06:43:47.539386 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:43:47.539395 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:43:47.539405 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 06:43:47.539414 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-17 06:43:47.539424 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 06:43:47.539440 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:44:32.723169 | orchestrator | 2026-02-17 06:44:32.723283 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 06:44:32.723300 | orchestrator | Tuesday 17 February 2026 06:43:47 +0000 (0:00:02.165) 0:57:02.787 ****** 2026-02-17 06:44:32.723312 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:44:32.723324 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:44:32.723335 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:44:32.723346 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 06:44:32.723358 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-17 06:44:32.723369 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 06:44:32.723381 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:44:32.723391 | orchestrator | 2026-02-17 06:44:32.723403 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-02-17 
06:44:32.723436 | orchestrator | Tuesday 17 February 2026 06:43:50 +0000 (0:00:03.171) 0:57:05.959 ****** 2026-02-17 06:44:32.723447 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.723459 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.723470 | orchestrator | 2026-02-17 06:44:32.723481 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 06:44:32.723492 | orchestrator | Tuesday 17 February 2026 06:43:51 +0000 (0:00:01.290) 0:57:07.250 ****** 2026-02-17 06:44:32.723503 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-5 2026-02-17 06:44:32.723514 | orchestrator | 2026-02-17 06:44:32.723525 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 06:44:32.723536 | orchestrator | Tuesday 17 February 2026 06:43:53 +0000 (0:00:01.224) 0:57:08.474 ****** 2026-02-17 06:44:32.723547 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-5 2026-02-17 06:44:32.723558 | orchestrator | 2026-02-17 06:44:32.723569 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 06:44:32.723580 | orchestrator | Tuesday 17 February 2026 06:43:54 +0000 (0:00:01.229) 0:57:09.704 ****** 2026-02-17 06:44:32.723591 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.723603 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.723614 | orchestrator | 2026-02-17 06:44:32.723625 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 06:44:32.723636 | orchestrator | Tuesday 17 February 2026 06:43:55 +0000 (0:00:01.281) 0:57:10.985 ****** 2026-02-17 06:44:32.723647 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.723658 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.723669 | 
orchestrator | 2026-02-17 06:44:32.723680 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-17 06:44:32.723691 | orchestrator | Tuesday 17 February 2026 06:43:57 +0000 (0:00:01.636) 0:57:12.622 ****** 2026-02-17 06:44:32.723705 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.723718 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.723731 | orchestrator | 2026-02-17 06:44:32.723743 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 06:44:32.723771 | orchestrator | Tuesday 17 February 2026 06:43:59 +0000 (0:00:01.669) 0:57:14.291 ****** 2026-02-17 06:44:32.723784 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.723824 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.723844 | orchestrator | 2026-02-17 06:44:32.723864 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 06:44:32.723883 | orchestrator | Tuesday 17 February 2026 06:44:00 +0000 (0:00:01.646) 0:57:15.938 ****** 2026-02-17 06:44:32.723901 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.723914 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.723926 | orchestrator | 2026-02-17 06:44:32.723939 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 06:44:32.723952 | orchestrator | Tuesday 17 February 2026 06:44:01 +0000 (0:00:01.229) 0:57:17.167 ****** 2026-02-17 06:44:32.723965 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.723977 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.723990 | orchestrator | 2026-02-17 06:44:32.724003 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 06:44:32.724015 | orchestrator | Tuesday 17 February 2026 06:44:03 +0000 (0:00:01.247) 0:57:18.414 ****** 2026-02-17 06:44:32.724027 | orchestrator | skipping: 
[testbed-node-4] 2026-02-17 06:44:32.724040 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724052 | orchestrator | 2026-02-17 06:44:32.724066 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 06:44:32.724077 | orchestrator | Tuesday 17 February 2026 06:44:04 +0000 (0:00:01.222) 0:57:19.637 ****** 2026-02-17 06:44:32.724088 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.724099 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.724110 | orchestrator | 2026-02-17 06:44:32.724121 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 06:44:32.724141 | orchestrator | Tuesday 17 February 2026 06:44:06 +0000 (0:00:02.058) 0:57:21.695 ****** 2026-02-17 06:44:32.724152 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.724163 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.724174 | orchestrator | 2026-02-17 06:44:32.724185 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 06:44:32.724195 | orchestrator | Tuesday 17 February 2026 06:44:08 +0000 (0:00:01.719) 0:57:23.415 ****** 2026-02-17 06:44:32.724206 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724217 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724228 | orchestrator | 2026-02-17 06:44:32.724239 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 06:44:32.724250 | orchestrator | Tuesday 17 February 2026 06:44:09 +0000 (0:00:01.219) 0:57:24.634 ****** 2026-02-17 06:44:32.724261 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724290 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724301 | orchestrator | 2026-02-17 06:44:32.724312 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 06:44:32.724323 | orchestrator | Tuesday 17 
February 2026 06:44:10 +0000 (0:00:01.256) 0:57:25.891 ****** 2026-02-17 06:44:32.724334 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.724345 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.724355 | orchestrator | 2026-02-17 06:44:32.724366 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 06:44:32.724377 | orchestrator | Tuesday 17 February 2026 06:44:11 +0000 (0:00:01.270) 0:57:27.162 ****** 2026-02-17 06:44:32.724388 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.724399 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.724410 | orchestrator | 2026-02-17 06:44:32.724421 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 06:44:32.724431 | orchestrator | Tuesday 17 February 2026 06:44:13 +0000 (0:00:01.257) 0:57:28.419 ****** 2026-02-17 06:44:32.724442 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.724453 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.724464 | orchestrator | 2026-02-17 06:44:32.724475 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 06:44:32.724486 | orchestrator | Tuesday 17 February 2026 06:44:14 +0000 (0:00:01.329) 0:57:29.749 ****** 2026-02-17 06:44:32.724497 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724508 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724518 | orchestrator | 2026-02-17 06:44:32.724530 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 06:44:32.724540 | orchestrator | Tuesday 17 February 2026 06:44:15 +0000 (0:00:01.243) 0:57:30.993 ****** 2026-02-17 06:44:32.724551 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724562 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724573 | orchestrator | 2026-02-17 06:44:32.724584 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-02-17 06:44:32.724595 | orchestrator | Tuesday 17 February 2026 06:44:16 +0000 (0:00:01.259) 0:57:32.252 ****** 2026-02-17 06:44:32.724606 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724617 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724628 | orchestrator | 2026-02-17 06:44:32.724639 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 06:44:32.724650 | orchestrator | Tuesday 17 February 2026 06:44:18 +0000 (0:00:01.232) 0:57:33.485 ****** 2026-02-17 06:44:32.724661 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.724672 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.724682 | orchestrator | 2026-02-17 06:44:32.724693 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 06:44:32.724704 | orchestrator | Tuesday 17 February 2026 06:44:19 +0000 (0:00:01.299) 0:57:34.785 ****** 2026-02-17 06:44:32.724715 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:44:32.724726 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:44:32.724743 | orchestrator | 2026-02-17 06:44:32.724755 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-17 06:44:32.724766 | orchestrator | Tuesday 17 February 2026 06:44:20 +0000 (0:00:01.365) 0:57:36.150 ****** 2026-02-17 06:44:32.724777 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724788 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724826 | orchestrator | 2026-02-17 06:44:32.724837 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-17 06:44:32.724848 | orchestrator | Tuesday 17 February 2026 06:44:22 +0000 (0:00:01.292) 0:57:37.443 ****** 2026-02-17 06:44:32.724859 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724870 | orchestrator | skipping: [testbed-node-5] 
2026-02-17 06:44:32.724881 | orchestrator | 2026-02-17 06:44:32.724898 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-17 06:44:32.724909 | orchestrator | Tuesday 17 February 2026 06:44:23 +0000 (0:00:01.351) 0:57:38.795 ****** 2026-02-17 06:44:32.724920 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724930 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724941 | orchestrator | 2026-02-17 06:44:32.724952 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-17 06:44:32.724963 | orchestrator | Tuesday 17 February 2026 06:44:24 +0000 (0:00:01.274) 0:57:40.070 ****** 2026-02-17 06:44:32.724974 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.724984 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.724995 | orchestrator | 2026-02-17 06:44:32.725006 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-17 06:44:32.725017 | orchestrator | Tuesday 17 February 2026 06:44:26 +0000 (0:00:01.302) 0:57:41.372 ****** 2026-02-17 06:44:32.725028 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.725039 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.725049 | orchestrator | 2026-02-17 06:44:32.725060 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-17 06:44:32.725071 | orchestrator | Tuesday 17 February 2026 06:44:27 +0000 (0:00:01.246) 0:57:42.619 ****** 2026-02-17 06:44:32.725082 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.725093 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.725104 | orchestrator | 2026-02-17 06:44:32.725115 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-17 06:44:32.725126 | orchestrator | Tuesday 17 February 2026 06:44:28 +0000 (0:00:01.296) 0:57:43.915 ****** 
2026-02-17 06:44:32.725137 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.725148 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.725158 | orchestrator | 2026-02-17 06:44:32.725169 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-17 06:44:32.725186 | orchestrator | Tuesday 17 February 2026 06:44:29 +0000 (0:00:01.246) 0:57:45.161 ****** 2026-02-17 06:44:32.725204 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.725223 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.725240 | orchestrator | 2026-02-17 06:44:32.725258 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-17 06:44:32.725276 | orchestrator | Tuesday 17 February 2026 06:44:31 +0000 (0:00:01.576) 0:57:46.738 ****** 2026-02-17 06:44:32.725292 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:44:32.725308 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:44:32.725325 | orchestrator | 2026-02-17 06:44:32.725384 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-17 06:45:18.195739 | orchestrator | Tuesday 17 February 2026 06:44:32 +0000 (0:00:01.240) 0:57:47.978 ****** 2026-02-17 06:45:18.195905 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:45:18.195921 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:45:18.195933 | orchestrator | 2026-02-17 06:45:18.195946 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-17 06:45:18.195958 | orchestrator | Tuesday 17 February 2026 06:44:33 +0000 (0:00:01.266) 0:57:49.245 ****** 2026-02-17 06:45:18.195992 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:45:18.196004 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:45:18.196015 | orchestrator | 2026-02-17 06:45:18.196027 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-17 06:45:18.196038 | orchestrator | Tuesday 17 February 2026 06:44:35 +0000 (0:00:01.267) 0:57:50.512 ****** 2026-02-17 06:45:18.196049 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:45:18.196060 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:45:18.196071 | orchestrator | 2026-02-17 06:45:18.196082 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-17 06:45:18.196093 | orchestrator | Tuesday 17 February 2026 06:44:36 +0000 (0:00:01.228) 0:57:51.741 ****** 2026-02-17 06:45:18.196104 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:45:18.196116 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:45:18.196127 | orchestrator | 2026-02-17 06:45:18.196138 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-17 06:45:18.196149 | orchestrator | Tuesday 17 February 2026 06:44:38 +0000 (0:00:02.200) 0:57:53.941 ****** 2026-02-17 06:45:18.196160 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:45:18.196171 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:45:18.196181 | orchestrator | 2026-02-17 06:45:18.196193 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-17 06:45:18.196204 | orchestrator | Tuesday 17 February 2026 06:44:41 +0000 (0:00:02.394) 0:57:56.336 ****** 2026-02-17 06:45:18.196215 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-5 2026-02-17 06:45:18.196227 | orchestrator | 2026-02-17 06:45:18.196238 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-17 06:45:18.196249 | orchestrator | Tuesday 17 February 2026 06:44:42 +0000 (0:00:01.248) 0:57:57.584 ****** 2026-02-17 06:45:18.196260 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:45:18.196271 | orchestrator | skipping: [testbed-node-5] 
2026-02-17 06:45:18.196282 | orchestrator |
2026-02-17 06:45:18.196295 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:45:18.196308 | orchestrator | Tuesday 17 February 2026 06:44:43 +0000 (0:00:01.289) 0:57:58.874 ******
2026-02-17 06:45:18.196320 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.196332 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.196345 | orchestrator |
2026-02-17 06:45:18.196357 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:45:18.196369 | orchestrator | Tuesday 17 February 2026 06:44:44 +0000 (0:00:01.245) 0:58:00.120 ******
2026-02-17 06:45:18.196382 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:45:18.196395 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:45:18.196407 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:45:18.196420 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:45:18.196432 | orchestrator |
2026-02-17 06:45:18.196459 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:45:18.196473 | orchestrator | Tuesday 17 February 2026 06:44:46 +0000 (0:00:02.004) 0:58:02.125 ******
2026-02-17 06:45:18.196485 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:45:18.196497 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:45:18.196510 | orchestrator |
2026-02-17 06:45:18.196522 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:45:18.196535 | orchestrator | Tuesday 17 February 2026 06:44:48 +0000 (0:00:01.933) 0:58:04.058 ******
2026-02-17 06:45:18.196547 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.196559 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.196571 | orchestrator |
2026-02-17 06:45:18.196584 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:45:18.196596 | orchestrator | Tuesday 17 February 2026 06:44:50 +0000 (0:00:01.290) 0:58:05.348 ******
2026-02-17 06:45:18.196617 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.196630 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.196641 | orchestrator |
2026-02-17 06:45:18.196652 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:45:18.196663 | orchestrator | Tuesday 17 February 2026 06:44:51 +0000 (0:00:01.373) 0:58:06.722 ******
2026-02-17 06:45:18.196674 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.196685 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.196696 | orchestrator |
2026-02-17 06:45:18.196707 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:45:18.196718 | orchestrator | Tuesday 17 February 2026 06:44:52 +0000 (0:00:01.287) 0:58:08.010 ******
2026-02-17 06:45:18.196729 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-5
2026-02-17 06:45:18.196740 | orchestrator |
2026-02-17 06:45:18.196773 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 06:45:18.196784 | orchestrator | Tuesday 17 February 2026 06:44:53 +0000 (0:00:01.257) 0:58:09.268 ******
2026-02-17 06:45:18.196795 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:45:18.196806 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:45:18.196817 | orchestrator |
2026-02-17 06:45:18.196828 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 06:45:18.196839 | orchestrator | Tuesday 17 February 2026 06:44:55 +0000 (0:00:01.845) 0:58:11.113 ******
2026-02-17 06:45:18.196850 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:45:18.196878 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:45:18.196890 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:45:18.196900 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.196912 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:45:18.196923 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:45:18.196934 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:45:18.196945 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.196956 | orchestrator |
2026-02-17 06:45:18.196967 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 06:45:18.196978 | orchestrator | Tuesday 17 February 2026 06:44:57 +0000 (0:00:01.613) 0:58:12.727 ******
2026-02-17 06:45:18.196989 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197000 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197011 | orchestrator |
2026-02-17 06:45:18.197023 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 06:45:18.197034 | orchestrator | Tuesday 17 February 2026 06:44:58 +0000 (0:00:01.237) 0:58:13.965 ******
2026-02-17 06:45:18.197045 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197056 | orchestrator |
2026-02-17 06:45:18.197067 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 06:45:18.197078 | orchestrator | Tuesday 17 February 2026 06:44:59 +0000 (0:00:01.158) 0:58:15.124 ******
2026-02-17 06:45:18.197088 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197100 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197111 | orchestrator |
2026-02-17 06:45:18.197122 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 06:45:18.197133 | orchestrator | Tuesday 17 February 2026 06:45:01 +0000 (0:00:01.266) 0:58:16.390 ******
2026-02-17 06:45:18.197144 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197155 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197166 | orchestrator |
2026-02-17 06:45:18.197177 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-17 06:45:18.197188 | orchestrator | Tuesday 17 February 2026 06:45:02 +0000 (0:00:01.268) 0:58:17.659 ******
2026-02-17 06:45:18.197206 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197217 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197228 | orchestrator |
2026-02-17 06:45:18.197239 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 06:45:18.197250 | orchestrator | Tuesday 17 February 2026 06:45:03 +0000 (0:00:01.262) 0:58:18.921 ******
2026-02-17 06:45:18.197261 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:45:18.197272 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:45:18.197283 | orchestrator |
2026-02-17 06:45:18.197294 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 06:45:18.197306 | orchestrator | Tuesday 17 February 2026 06:45:06 +0000 (0:00:02.656) 0:58:21.578 ******
2026-02-17 06:45:18.197317 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:45:18.197328 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:45:18.197338 | orchestrator |
2026-02-17 06:45:18.197349 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 06:45:18.197360 | orchestrator | Tuesday 17 February 2026 06:45:07 +0000 (0:00:01.580) 0:58:23.159 ******
2026-02-17 06:45:18.197371 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-5
2026-02-17 06:45:18.197383 | orchestrator |
2026-02-17 06:45:18.197400 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 06:45:18.197411 | orchestrator | Tuesday 17 February 2026 06:45:09 +0000 (0:00:01.353) 0:58:24.513 ******
2026-02-17 06:45:18.197422 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197434 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197444 | orchestrator |
2026-02-17 06:45:18.197455 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 06:45:18.197466 | orchestrator | Tuesday 17 February 2026 06:45:10 +0000 (0:00:01.278) 0:58:25.791 ******
2026-02-17 06:45:18.197477 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197488 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197499 | orchestrator |
2026-02-17 06:45:18.197510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 06:45:18.197521 | orchestrator | Tuesday 17 February 2026 06:45:11 +0000 (0:00:01.231) 0:58:27.022 ******
2026-02-17 06:45:18.197533 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197544 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197555 | orchestrator |
2026-02-17 06:45:18.197566 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 06:45:18.197577 | orchestrator | Tuesday 17 February 2026 06:45:12 +0000 (0:00:01.196) 0:58:28.219 ******
2026-02-17 06:45:18.197587 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197598 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197609 | orchestrator |
2026-02-17 06:45:18.197620 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 06:45:18.197631 | orchestrator | Tuesday 17 February 2026 06:45:14 +0000 (0:00:01.386) 0:58:29.606 ******
2026-02-17 06:45:18.197642 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197653 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197664 | orchestrator |
2026-02-17 06:45:18.197675 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 06:45:18.197687 | orchestrator | Tuesday 17 February 2026 06:45:15 +0000 (0:00:01.255) 0:58:30.861 ******
2026-02-17 06:45:18.197698 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197709 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197720 | orchestrator |
2026-02-17 06:45:18.197731 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 06:45:18.197742 | orchestrator | Tuesday 17 February 2026 06:45:16 +0000 (0:00:01.260) 0:58:32.122 ******
2026-02-17 06:45:18.197770 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:45:18.197781 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:45:18.197792 | orchestrator |
2026-02-17 06:45:18.197810 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 06:46:00.083651 | orchestrator | Tuesday 17 February 2026 06:45:18 +0000 (0:00:01.328) 0:58:33.451 ******
2026-02-17 06:46:00.083834 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.083855 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.083867 | orchestrator |
2026-02-17 06:46:00.083879 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 06:46:00.083891 | orchestrator | Tuesday 17 February 2026 06:45:19 +0000 (0:00:01.277) 0:58:34.728 ******
2026-02-17 06:46:00.083903 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:00.083915 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:00.083926 | orchestrator |
2026-02-17 06:46:00.083938 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 06:46:00.083950 | orchestrator | Tuesday 17 February 2026 06:45:20 +0000 (0:00:01.295) 0:58:36.024 ******
2026-02-17 06:46:00.083962 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-5
2026-02-17 06:46:00.083973 | orchestrator |
2026-02-17 06:46:00.083984 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 06:46:00.083995 | orchestrator | Tuesday 17 February 2026 06:45:22 +0000 (0:00:01.618) 0:58:37.643 ******
2026-02-17 06:46:00.084007 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-17 06:46:00.084018 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-17 06:46:00.084029 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-17 06:46:00.084040 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-17 06:46:00.084051 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-17 06:46:00.084062 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-17 06:46:00.084073 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-17 06:46:00.084083 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-17 06:46:00.084095 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-17 06:46:00.084105 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-17 06:46:00.084116 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-17 06:46:00.084127 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-17 06:46:00.084138 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-17 06:46:00.084149 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-17 06:46:00.084160 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-17 06:46:00.084173 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-17 06:46:00.084186 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 06:46:00.084202 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 06:46:00.084223 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 06:46:00.084250 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 06:46:00.084276 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 06:46:00.084294 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 06:46:00.084313 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 06:46:00.084332 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 06:46:00.084374 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 06:46:00.084393 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 06:46:00.084410 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 06:46:00.084428 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 06:46:00.084447 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-17 06:46:00.084466 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-17 06:46:00.084515 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-17 06:46:00.084535 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-17 06:46:00.084554 | orchestrator |
2026-02-17 06:46:00.084572 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 06:46:00.084591 | orchestrator | Tuesday 17 February 2026 06:45:29 +0000 (0:00:06.847) 0:58:44.490 ******
2026-02-17 06:46:00.084612 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-5
2026-02-17 06:46:00.084630 | orchestrator |
2026-02-17 06:46:00.084648 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-17 06:46:00.084664 | orchestrator | Tuesday 17 February 2026 06:45:30 +0000 (0:00:01.472) 0:58:45.962 ******
2026-02-17 06:46:00.084684 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 06:46:00.084731 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:46:00.084752 | orchestrator |
2026-02-17 06:46:00.084768 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-17 06:46:00.084785 | orchestrator | Tuesday 17 February 2026 06:45:32 +0000 (0:00:01.662) 0:58:47.625 ******
2026-02-17 06:46:00.084803 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 06:46:00.084822 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:46:00.084841 | orchestrator |
2026-02-17 06:46:00.084859 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 06:46:00.084900 | orchestrator | Tuesday 17 February 2026 06:45:34 +0000 (0:00:02.152) 0:58:49.777 ******
2026-02-17 06:46:00.084913 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.084924 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.084935 | orchestrator |
2026-02-17 06:46:00.084946 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 06:46:00.084957 | orchestrator | Tuesday 17 February 2026 06:45:35 +0000 (0:00:01.385) 0:58:51.163 ******
2026-02-17 06:46:00.084982 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.084993 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085015 | orchestrator |
2026-02-17 06:46:00.085026 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 06:46:00.085037 | orchestrator | Tuesday 17 February 2026 06:45:37 +0000 (0:00:01.291) 0:58:52.454 ******
2026-02-17 06:46:00.085048 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085059 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085070 | orchestrator |
2026-02-17 06:46:00.085081 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 06:46:00.085092 | orchestrator | Tuesday 17 February 2026 06:45:38 +0000 (0:00:01.364) 0:58:53.818 ******
2026-02-17 06:46:00.085103 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085114 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085125 | orchestrator |
2026-02-17 06:46:00.085135 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 06:46:00.085146 | orchestrator | Tuesday 17 February 2026 06:45:39 +0000 (0:00:01.219) 0:58:55.038 ******
2026-02-17 06:46:00.085157 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085168 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085179 | orchestrator |
2026-02-17 06:46:00.085190 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 06:46:00.085201 | orchestrator | Tuesday 17 February 2026 06:45:41 +0000 (0:00:01.250) 0:58:56.369 ******
2026-02-17 06:46:00.085212 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085223 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085234 | orchestrator |
2026-02-17 06:46:00.085245 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 06:46:00.085269 | orchestrator | Tuesday 17 February 2026 06:45:42 +0000 (0:00:01.250) 0:58:57.620 ******
2026-02-17 06:46:00.085280 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085291 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085301 | orchestrator |
2026-02-17 06:46:00.085312 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 06:46:00.085323 | orchestrator | Tuesday 17 February 2026 06:45:43 +0000 (0:00:01.349) 0:58:58.969 ******
2026-02-17 06:46:00.085334 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085345 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085356 | orchestrator |
2026-02-17 06:46:00.085367 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 06:46:00.085378 | orchestrator | Tuesday 17 February 2026 06:45:44 +0000 (0:00:01.294) 0:59:00.264 ******
2026-02-17 06:46:00.085389 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085399 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085410 | orchestrator |
2026-02-17 06:46:00.085421 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 06:46:00.085432 | orchestrator | Tuesday 17 February 2026 06:45:46 +0000 (0:00:01.306) 0:59:01.570 ******
2026-02-17 06:46:00.085443 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085463 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085474 | orchestrator |
2026-02-17 06:46:00.085485 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 06:46:00.085496 | orchestrator | Tuesday 17 February 2026 06:45:47 +0000 (0:00:01.284) 0:59:02.854 ******
2026-02-17 06:46:00.085507 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:00.085518 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:00.085529 | orchestrator |
2026-02-17 06:46:00.085540 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 06:46:00.085551 | orchestrator | Tuesday 17 February 2026 06:45:48 +0000 (0:00:01.280) 0:59:04.135 ******
2026-02-17 06:46:00.085562 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-02-17 06:46:00.085572 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-17 06:46:00.085583 | orchestrator |
2026-02-17 06:46:00.085594 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 06:46:00.085605 | orchestrator | Tuesday 17 February 2026 06:45:53 +0000 (0:00:04.536) 0:59:08.672 ******
2026-02-17 06:46:00.085616 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 06:46:00.085627 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:46:00.085638 | orchestrator |
2026-02-17 06:46:00.085649 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 06:46:00.085660 | orchestrator | Tuesday 17 February 2026 06:45:54 +0000 (0:00:01.280) 0:59:09.953 ******
2026-02-17 06:46:00.085673 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-17 06:46:00.085695 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-17 06:46:48.241204 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-17 06:46:48.241344 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-17 06:46:48.241362 | orchestrator |
2026-02-17 06:46:48.241376 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 06:46:48.241389 | orchestrator | Tuesday 17 February 2026 06:46:00 +0000 (0:00:05.387) 0:59:15.340 ******
2026-02-17 06:46:48.241400 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.241412 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:48.241423 | orchestrator |
2026-02-17 06:46:48.241435 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 06:46:48.241446 | orchestrator | Tuesday 17 February 2026 06:46:01 +0000 (0:00:01.268) 0:59:16.610 ******
2026-02-17 06:46:48.241457 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.241468 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:48.241479 | orchestrator |
2026-02-17 06:46:48.241490 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:46:48.241502 | orchestrator | Tuesday 17 February 2026 06:46:02 +0000 (0:00:01.262) 0:59:17.872 ******
2026-02-17 06:46:48.241513 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.241524 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:48.241535 | orchestrator |
2026-02-17 06:46:48.241546 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:46:48.241556 | orchestrator | Tuesday 17 February 2026 06:46:03 +0000 (0:00:01.298) 0:59:19.171 ******
2026-02-17 06:46:48.241567 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.241578 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:48.241589 | orchestrator |
2026-02-17 06:46:48.241600 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:46:48.241612 | orchestrator | Tuesday 17 February 2026 06:46:05 +0000 (0:00:01.271) 0:59:20.442 ******
2026-02-17 06:46:48.241623 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.241634 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:48.241644 | orchestrator |
2026-02-17 06:46:48.241655 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:46:48.241701 | orchestrator | Tuesday 17 February 2026 06:46:06 +0000 (0:00:01.293) 0:59:21.736 ******
2026-02-17 06:46:48.241712 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.241725 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.241737 | orchestrator |
2026-02-17 06:46:48.241765 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:46:48.241777 | orchestrator | Tuesday 17 February 2026 06:46:07 +0000 (0:00:01.413) 0:59:23.149 ******
2026-02-17 06:46:48.241790 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-17 06:46:48.241803 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-17 06:46:48.241814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-17 06:46:48.241827 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.241839 | orchestrator |
2026-02-17 06:46:48.241852 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:46:48.241864 | orchestrator | Tuesday 17 February 2026 06:46:09 +0000 (0:00:01.452) 0:59:24.602 ******
2026-02-17 06:46:48.241877 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-17 06:46:48.241889 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-17 06:46:48.241901 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-17 06:46:48.241922 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.241934 | orchestrator |
2026-02-17 06:46:48.241947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:46:48.241960 | orchestrator | Tuesday 17 February 2026 06:46:10 +0000 (0:00:01.440) 0:59:26.042 ******
2026-02-17 06:46:48.241972 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-17 06:46:48.241984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-17 06:46:48.241996 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-17 06:46:48.242008 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.242085 | orchestrator |
2026-02-17 06:46:48.242098 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:46:48.242110 | orchestrator | Tuesday 17 February 2026 06:46:12 +0000 (0:00:01.472) 0:59:27.514 ******
2026-02-17 06:46:48.242121 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.242131 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.242142 | orchestrator |
2026-02-17 06:46:48.242153 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:46:48.242164 | orchestrator | Tuesday 17 February 2026 06:46:13 +0000 (0:00:01.342) 0:59:28.856 ******
2026-02-17 06:46:48.242175 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-17 06:46:48.242186 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-17 06:46:48.242197 | orchestrator |
2026-02-17 06:46:48.242208 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 06:46:48.242218 | orchestrator | Tuesday 17 February 2026 06:46:15 +0000 (0:00:01.520) 0:59:30.377 ******
2026-02-17 06:46:48.242229 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.242240 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.242251 | orchestrator |
2026-02-17 06:46:48.242280 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-17 06:46:48.242292 | orchestrator | Tuesday 17 February 2026 06:46:17 +0000 (0:00:01.995) 0:59:32.372 ******
2026-02-17 06:46:48.242303 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.242314 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:48.242325 | orchestrator |
2026-02-17 06:46:48.242336 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-17 06:46:48.242347 | orchestrator | Tuesday 17 February 2026 06:46:18 +0000 (0:00:01.257) 0:59:33.630 ******
2026-02-17 06:46:48.242358 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, testbed-node-5
2026-02-17 06:46:48.242371 | orchestrator |
2026-02-17 06:46:48.242381 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-17 06:46:48.242393 | orchestrator | Tuesday 17 February 2026 06:46:19 +0000 (0:00:01.264) 0:59:34.895 ******
2026-02-17 06:46:48.242403 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-17 06:46:48.242414 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-17 06:46:48.242425 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-17 06:46:48.242436 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-17 06:46:48.242447 | orchestrator |
2026-02-17 06:46:48.242458 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-17 06:46:48.242469 | orchestrator | Tuesday 17 February 2026 06:46:21 +0000 (0:00:02.017) 0:59:36.912 ******
2026-02-17 06:46:48.242480 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-17 06:46:48.242490 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-17 06:46:48.242501 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-17 06:46:48.242512 | orchestrator |
2026-02-17 06:46:48.242536 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-17 06:46:48.242548 | orchestrator | Tuesday 17 February 2026 06:46:24 +0000 (0:00:03.142) 0:59:40.055 ******
2026-02-17 06:46:48.242569 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-17 06:46:48.242589 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-17 06:46:48.242600 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.242611 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-17 06:46:48.242622 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-17 06:46:48.242633 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.242644 | orchestrator |
2026-02-17 06:46:48.242655 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-17 06:46:48.242704 | orchestrator | Tuesday 17 February 2026 06:46:26 +0000 (0:00:02.093) 0:59:42.148 ******
2026-02-17 06:46:48.242715 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.242726 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.242736 | orchestrator |
2026-02-17 06:46:48.242747 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-17 06:46:48.242758 | orchestrator | Tuesday 17 February 2026 06:46:28 +0000 (0:00:01.696) 0:59:43.844 ******
2026-02-17 06:46:48.242769 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.242780 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:46:48.242791 | orchestrator |
2026-02-17 06:46:48.242808 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-17 06:46:48.242820 | orchestrator | Tuesday 17 February 2026 06:46:29 +0000 (0:00:01.372) 0:59:45.216 ******
2026-02-17 06:46:48.242831 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-5
2026-02-17 06:46:48.242842 | orchestrator |
2026-02-17 06:46:48.242853 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-17 06:46:48.242864 | orchestrator | Tuesday 17 February 2026 06:46:31 +0000 (0:00:01.352) 0:59:46.569 ******
2026-02-17 06:46:48.242875 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-5
2026-02-17 06:46:48.242885 | orchestrator |
2026-02-17 06:46:48.242896 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-17 06:46:48.242907 | orchestrator | Tuesday 17 February 2026 06:46:32 +0000 (0:00:01.257) 0:59:47.827 ******
2026-02-17 06:46:48.242918 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.242929 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.242940 | orchestrator |
2026-02-17 06:46:48.242951 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-17 06:46:48.242962 | orchestrator | Tuesday 17 February 2026 06:46:34 +0000 (0:00:02.257) 0:59:50.084 ******
2026-02-17 06:46:48.242972 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.242983 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.242994 | orchestrator |
2026-02-17 06:46:48.243005 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-17 06:46:48.243016 | orchestrator | Tuesday 17 February 2026 06:46:37 +0000 (0:00:02.382) 0:59:52.467 ******
2026-02-17 06:46:48.243026 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.243037 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.243048 | orchestrator |
2026-02-17 06:46:48.243059 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-17 06:46:48.243070 | orchestrator | Tuesday 17 February 2026 06:46:39 +0000 (0:00:02.434) 0:59:54.901 ******
2026-02-17 06:46:48.243081 | orchestrator | changed: [testbed-node-4]
2026-02-17 06:46:48.243092 | orchestrator | changed: [testbed-node-5]
2026-02-17 06:46:48.243103 | orchestrator |
2026-02-17 06:46:48.243114 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-17 06:46:48.243125 | orchestrator | Tuesday 17 February 2026 06:46:43 +0000 (0:00:03.551) 0:59:58.453 ******
2026-02-17 06:46:48.243136 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:46:48.243147 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:46:48.243157 | orchestrator |
2026-02-17 06:46:48.243168 | orchestrator | TASK [Set max_mds] *************************************************************
2026-02-17 06:46:48.243179 | orchestrator | Tuesday 17 February 2026 06:46:44 +0000 (0:00:01.709) 1:00:00.163 ******
2026-02-17 06:46:48.243190 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:46:48.243207 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-17 06:47:12.332071 | orchestrator |
2026-02-17 06:47:12.332184 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-02-17 06:47:12.332201 | orchestrator |
2026-02-17 06:47:12.332213 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-17 06:47:12.332225 | orchestrator | Tuesday 17 February 2026 06:46:48 +0000 (0:00:03.328) 1:00:03.491 ******
2026-02-17 06:47:12.332236 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-17 06:47:12.332247 | orchestrator |
2026-02-17 06:47:12.332258 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-17 06:47:12.332269 | orchestrator | Tuesday 17 February 2026 06:46:49 +0000 (0:00:01.310) 1:00:04.802 ******
2026-02-17 06:47:12.332280 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:47:12.332292 | orchestrator |
2026-02-17 06:47:12.332303 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-17 06:47:12.332314 | orchestrator | Tuesday 17 February 2026 06:46:50 +0000 (0:00:01.452) 1:00:06.255 ******
2026-02-17 06:47:12.332325 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:47:12.332336 | orchestrator |
2026-02-17 06:47:12.332347 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-17 06:47:12.332358 | orchestrator | Tuesday 17 February 2026 06:46:52 +0000 (0:00:01.176) 1:00:07.431 ******
2026-02-17 06:47:12.332369 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:47:12.332380 |
orchestrator | 2026-02-17 06:47:12.332391 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:47:12.332402 | orchestrator | Tuesday 17 February 2026 06:46:53 +0000 (0:00:01.469) 1:00:08.901 ****** 2026-02-17 06:47:12.332413 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:47:12.332424 | orchestrator | 2026-02-17 06:47:12.332435 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:47:12.332446 | orchestrator | Tuesday 17 February 2026 06:46:54 +0000 (0:00:01.157) 1:00:10.058 ****** 2026-02-17 06:47:12.332457 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:47:12.332467 | orchestrator | 2026-02-17 06:47:12.332478 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:47:12.332489 | orchestrator | Tuesday 17 February 2026 06:46:55 +0000 (0:00:01.147) 1:00:11.206 ****** 2026-02-17 06:47:12.332500 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:47:12.332511 | orchestrator | 2026-02-17 06:47:12.332522 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:47:12.332534 | orchestrator | Tuesday 17 February 2026 06:46:57 +0000 (0:00:01.169) 1:00:12.375 ****** 2026-02-17 06:47:12.332544 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:47:12.332556 | orchestrator | 2026-02-17 06:47:12.332567 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:47:12.332577 | orchestrator | Tuesday 17 February 2026 06:46:58 +0000 (0:00:01.189) 1:00:13.564 ****** 2026-02-17 06:47:12.332588 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:47:12.332599 | orchestrator | 2026-02-17 06:47:12.332610 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:47:12.332621 | orchestrator | Tuesday 17 February 2026 06:46:59 +0000 
(0:00:01.144) 1:00:14.709 ****** 2026-02-17 06:47:12.332632 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:47:12.332735 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:47:12.332748 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:47:12.332759 | orchestrator | 2026-02-17 06:47:12.332770 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-17 06:47:12.332781 | orchestrator | Tuesday 17 February 2026 06:47:01 +0000 (0:00:02.058) 1:00:16.767 ****** 2026-02-17 06:47:12.332792 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:47:12.332803 | orchestrator | 2026-02-17 06:47:12.332814 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:47:12.332848 | orchestrator | Tuesday 17 February 2026 06:47:02 +0000 (0:00:01.271) 1:00:18.038 ****** 2026-02-17 06:47:12.332859 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:47:12.332870 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:47:12.332881 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:47:12.332892 | orchestrator | 2026-02-17 06:47:12.332902 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:47:12.332913 | orchestrator | Tuesday 17 February 2026 06:47:05 +0000 (0:00:03.193) 1:00:21.232 ****** 2026-02-17 06:47:12.332924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-17 06:47:12.332935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-17 06:47:12.332946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-17 
06:47:12.332957 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:47:12.332968 | orchestrator | 2026-02-17 06:47:12.332979 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:47:12.332990 | orchestrator | Tuesday 17 February 2026 06:47:07 +0000 (0:00:01.888) 1:00:23.121 ****** 2026-02-17 06:47:12.333003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:47:12.333017 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:47:12.333046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:47:12.333058 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:47:12.333070 | orchestrator | 2026-02-17 06:47:12.333080 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:47:12.333091 | orchestrator | Tuesday 17 February 2026 06:47:09 +0000 (0:00:02.086) 1:00:25.207 ****** 2026-02-17 06:47:12.333105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 
06:47:12.333119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:12.333130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:12.333141 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:47:12.333152 | orchestrator | 2026-02-17 06:47:12.333163 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:47:12.333175 | orchestrator | Tuesday 17 February 2026 06:47:11 +0000 (0:00:01.183) 1:00:26.391 ****** 2026-02-17 06:47:12.333214 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:47:03.279468', 'end': '2026-02-17 06:47:03.338996', 'delta': '0:00:00.059528', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:47:12.333239 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:47:04.220819', 'end': '2026-02-17 06:47:04.265767', 'delta': '0:00:00.044948', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:47:12.333257 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:47:04.772126', 'end': '2026-02-17 06:47:04.814605', 'delta': '0:00:00.042479', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:47:12.333276 | orchestrator | 2026-02-17 06:47:12.333302 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:47:29.984602 | orchestrator | Tuesday 17 February 2026 06:47:12 +0000 (0:00:01.198) 1:00:27.589 ****** 2026-02-17 06:47:29.984817 | orchestrator | ok: [testbed-node-3] 2026-02-17 
06:47:29.984835 | orchestrator | 
2026-02-17 06:47:29.984849 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-17 06:47:29.984861 | orchestrator | Tuesday 17 February 2026 06:47:13 +0000 (0:00:01.277) 1:00:28.867 ******
2026-02-17 06:47:29.984873 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:47:29.984886 | orchestrator | 
2026-02-17 06:47:29.984898 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-17 06:47:29.984910 | orchestrator | Tuesday 17 February 2026 06:47:14 +0000 (0:00:01.257) 1:00:30.125 ******
2026-02-17 06:47:29.984921 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:47:29.984932 | orchestrator | 
2026-02-17 06:47:29.984944 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-17 06:47:29.984955 | orchestrator | Tuesday 17 February 2026 06:47:16 +0000 (0:00:01.166) 1:00:31.292 ******
2026-02-17 06:47:29.984966 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-17 06:47:29.984978 | orchestrator | 
2026-02-17 06:47:29.984990 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-17 06:47:29.985001 | orchestrator | Tuesday 17 February 2026 06:47:17 +0000 (0:00:01.943) 1:00:33.235 ******
2026-02-17 06:47:29.985012 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:47:29.985023 | orchestrator | 
2026-02-17 06:47:29.985034 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-17 06:47:29.985070 | orchestrator | Tuesday 17 February 2026 06:47:19 +0000 (0:00:01.147) 1:00:34.383 ******
2026-02-17 06:47:29.985082 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:47:29.985093 | orchestrator | 
2026-02-17 06:47:29.985104 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-17 06:47:29.985116 | orchestrator | Tuesday 17 February 2026 06:47:20 +0000 (0:00:01.155) 1:00:35.539 ******
2026-02-17 06:47:29.985129 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:47:29.985142 | orchestrator | 
2026-02-17 06:47:29.985155 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-17 06:47:29.985167 | orchestrator | Tuesday 17 February 2026 06:47:21 +0000 (0:00:01.293) 1:00:36.832 ******
2026-02-17 06:47:29.985179 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:47:29.985191 | orchestrator | 
2026-02-17 06:47:29.985204 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-17 06:47:29.985217 | orchestrator | Tuesday 17 February 2026 06:47:22 +0000 (0:00:01.168) 1:00:38.000 ******
2026-02-17 06:47:29.985230 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:47:29.985242 | orchestrator | 
2026-02-17 06:47:29.985255 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-17 06:47:29.985267 | orchestrator | Tuesday 17 February 2026 06:47:23 +0000 (0:00:01.139) 1:00:39.139 ******
2026-02-17 06:47:29.985280 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:47:29.985292 | orchestrator | 
2026-02-17 06:47:29.985305 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-17 06:47:29.985317 | orchestrator | Tuesday 17 February 2026 06:47:25 +0000 (0:00:01.159) 1:00:40.299 ******
2026-02-17 06:47:29.985330 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:47:29.985342 | orchestrator | 
2026-02-17 06:47:29.985372 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-17 06:47:29.985385 | orchestrator | Tuesday 17 February 2026 06:47:26 +0000 (0:00:01.222) 1:00:41.521 ******
2026-02-17 06:47:29.985397 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:47:29.985411 | orchestrator | 
2026-02-17 06:47:29.985423 |
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:47:29.985436 | orchestrator | Tuesday 17 February 2026 06:47:27 +0000 (0:00:01.183) 1:00:42.705 ****** 2026-02-17 06:47:29.985448 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:47:29.985461 | orchestrator | 2026-02-17 06:47:29.985473 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:47:29.985486 | orchestrator | Tuesday 17 February 2026 06:47:28 +0000 (0:00:01.129) 1:00:43.835 ****** 2026-02-17 06:47:29.985497 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:47:29.985508 | orchestrator | 2026-02-17 06:47:29.985519 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:47:29.985530 | orchestrator | Tuesday 17 February 2026 06:47:29 +0000 (0:00:01.177) 1:00:45.013 ****** 2026-02-17 06:47:29.985544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:47:29.985562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'uuids': ['b2ca6990-5b39-46e1-9ab9-fa89aec205ee'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP']}})  2026-02-17 06:47:29.985608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce83e4f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:47:29.985643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f']}})  2026-02-17 06:47:29.985657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:47:29.985675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:47:29.985688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:47:29.985700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:47:29.985711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac', 'dm-uuid-CRYPT-LUKS2-edb3e2e5a632414f8a4f0db6f2dd266c-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:47:29.985740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:47:31.358408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'uuids': ['edb3e2e5-a632-414f-8a4f-0db6f2dd266c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac']}})  2026-02-17 06:47:31.358509 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3']}})  2026-02-17 06:47:31.358525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:47:31.358574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d567a40', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:47:31.358690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:47:31.358705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:47:31.358717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP', 'dm-uuid-CRYPT-LUKS2-b2ca69905b3946e19ab9fa89aec205ee-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:47:31.358729 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:47:31.358741 | orchestrator | 2026-02-17 06:47:31.358751 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:47:31.358762 | orchestrator | Tuesday 17 February 2026 06:47:31 +0000 (0:00:01.388) 1:00:46.401 ****** 2026-02-17 06:47:31.358779 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:31.358790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3', 'dm-uuid-LVM-7deHw4lWkyfCkecADNn6zBkV4qXR2vQFXx6FOQOcUiFEqIX5dZe6e9bd1X8vprEP'], 'uuids': ['b2ca6990-5b39-46e1-9ab9-fa89aec205ee'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:31.358801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3', 'scsi-SQEMU_QEMU_HARDDISK_ce83e4f2-c585-44a6-bfcd-a8cbb0540fa3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce83e4f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:31.358828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-E3Eucn-drop-pwn4-1HBG-8XG2-sNAo-468qxz', 'scsi-0QEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427', 'scsi-SQEMU_QEMU_HARDDISK_fe38296d-c093-48ca-96c0-8f602ad79427'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531167 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac', 'dm-uuid-CRYPT-LUKS2-edb3e2e5a632414f8a4f0db6f2dd266c-y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--366ad200--d272--50e2--9bbd--3174591b235f-osd--block--366ad200--d272--50e2--9bbd--3174591b235f', 'dm-uuid-LVM-IIzQD1d2im6hDDg8oMI63eUgqrArOr02y3sgMv8r0PZe8WYxMQ1PyRXDCwe04fac'], 'uuids': ['edb3e2e5-a632-414f-8a4f-0db6f2dd266c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe38296d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y3sgMv-8r0P-Ze8W-YxMQ-1PyR-XDCw-e04fac']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qNHkLt-Ozek-Mq1u-BnDJ-EwdT-y4d1-cuYCod', 'scsi-0QEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350', 'scsi-SQEMU_QEMU_HARDDISK_5f284eb4-05bb-45c0-8f93-4c0e151e7350'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5f284eb4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3-osd--block--c478ad6b--fe8a--5fdf--805d--21e03f23f5d3']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:47:32.531567 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d567a40', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d567a40-efe3-40c8-a008-8295f8dd6e25-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:48:01.907375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:48:01.907487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:48:01.907505 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP', 'dm-uuid-CRYPT-LUKS2-b2ca69905b3946e19ab9fa89aec205ee-Xx6FOQ-OcUi-FEqI-X5dZ-e6e9-bd1X-8vprEP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:48:01.907540 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.907555 | orchestrator | 2026-02-17 06:48:01.907568 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 06:48:01.907580 | orchestrator | Tuesday 17 February 2026 06:47:32 +0000 (0:00:01.393) 1:00:47.794 ****** 2026-02-17 06:48:01.907592 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:01.907642 | orchestrator | 2026-02-17 06:48:01.907654 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 06:48:01.907665 | orchestrator | Tuesday 17 February 2026 06:47:34 +0000 (0:00:01.554) 1:00:49.349 ****** 2026-02-17 06:48:01.907676 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:01.907687 | orchestrator | 2026-02-17 06:48:01.907698 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:48:01.907709 | orchestrator | Tuesday 17 February 2026 06:47:35 +0000 (0:00:01.136) 1:00:50.485 ****** 2026-02-17 06:48:01.907720 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:01.907731 | orchestrator | 2026-02-17 06:48:01.907743 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:48:01.907754 | orchestrator | Tuesday 17 February 2026 06:47:36 +0000 (0:00:01.445) 1:00:51.930 ****** 2026-02-17 06:48:01.907765 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.907776 | orchestrator | 2026-02-17 06:48:01.907787 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:48:01.907798 | orchestrator | Tuesday 17 February 2026 06:47:37 +0000 (0:00:01.233) 1:00:53.163 ****** 2026-02-17 06:48:01.907809 | orchestrator | skipping: [testbed-node-3] 2026-02-17 
06:48:01.907820 | orchestrator | 2026-02-17 06:48:01.907831 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:48:01.907842 | orchestrator | Tuesday 17 February 2026 06:47:39 +0000 (0:00:01.283) 1:00:54.447 ****** 2026-02-17 06:48:01.907853 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.907865 | orchestrator | 2026-02-17 06:48:01.907876 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 06:48:01.907887 | orchestrator | Tuesday 17 February 2026 06:47:40 +0000 (0:00:01.189) 1:00:55.637 ****** 2026-02-17 06:48:01.907900 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-17 06:48:01.907913 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-17 06:48:01.907925 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-17 06:48:01.907938 | orchestrator | 2026-02-17 06:48:01.907951 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 06:48:01.907964 | orchestrator | Tuesday 17 February 2026 06:47:42 +0000 (0:00:02.073) 1:00:57.711 ****** 2026-02-17 06:48:01.907976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-17 06:48:01.907990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-17 06:48:01.908003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-17 06:48:01.908015 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.908028 | orchestrator | 2026-02-17 06:48:01.908040 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 06:48:01.908054 | orchestrator | Tuesday 17 February 2026 06:47:43 +0000 (0:00:01.284) 1:00:58.996 ****** 2026-02-17 06:48:01.908085 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-17 06:48:01.908107 | 
orchestrator | 2026-02-17 06:48:01.908119 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:48:01.908131 | orchestrator | Tuesday 17 February 2026 06:47:44 +0000 (0:00:01.123) 1:01:00.120 ****** 2026-02-17 06:48:01.908142 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.908153 | orchestrator | 2026-02-17 06:48:01.908164 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:48:01.908175 | orchestrator | Tuesday 17 February 2026 06:47:45 +0000 (0:00:01.124) 1:01:01.245 ****** 2026-02-17 06:48:01.908186 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.908197 | orchestrator | 2026-02-17 06:48:01.908214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:48:01.908241 | orchestrator | Tuesday 17 February 2026 06:47:47 +0000 (0:00:01.152) 1:01:02.397 ****** 2026-02-17 06:48:01.908253 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.908264 | orchestrator | 2026-02-17 06:48:01.908284 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:48:01.908296 | orchestrator | Tuesday 17 February 2026 06:47:48 +0000 (0:00:01.172) 1:01:03.570 ****** 2026-02-17 06:48:01.908307 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:01.908318 | orchestrator | 2026-02-17 06:48:01.908329 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:48:01.908340 | orchestrator | Tuesday 17 February 2026 06:47:49 +0000 (0:00:01.292) 1:01:04.863 ****** 2026-02-17 06:48:01.908351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:48:01.908362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 06:48:01.908373 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-17 06:48:01.908384 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.908395 | orchestrator | 2026-02-17 06:48:01.908406 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:48:01.908418 | orchestrator | Tuesday 17 February 2026 06:47:51 +0000 (0:00:01.493) 1:01:06.356 ****** 2026-02-17 06:48:01.908428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:48:01.908439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 06:48:01.908450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 06:48:01.908461 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.908472 | orchestrator | 2026-02-17 06:48:01.908483 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:48:01.908494 | orchestrator | Tuesday 17 February 2026 06:47:52 +0000 (0:00:01.554) 1:01:07.911 ****** 2026-02-17 06:48:01.908505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-17 06:48:01.908516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-17 06:48:01.908538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-17 06:48:01.908550 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:01.908572 | orchestrator | 2026-02-17 06:48:01.908583 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:48:01.908612 | orchestrator | Tuesday 17 February 2026 06:47:54 +0000 (0:00:01.477) 1:01:09.388 ****** 2026-02-17 06:48:01.908624 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:01.908636 | orchestrator | 2026-02-17 06:48:01.908646 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:48:01.908658 | orchestrator | Tuesday 17 February 2026 06:47:55 +0000 
(0:00:01.162) 1:01:10.550 ****** 2026-02-17 06:48:01.908669 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-17 06:48:01.908680 | orchestrator | 2026-02-17 06:48:01.908691 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 06:48:01.908702 | orchestrator | Tuesday 17 February 2026 06:47:56 +0000 (0:00:01.315) 1:01:11.866 ****** 2026-02-17 06:48:01.908713 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:48:01.908731 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:48:01.908742 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:48:01.908753 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-17 06:48:01.908764 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 06:48:01.908775 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 06:48:01.908786 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:48:01.908797 | orchestrator | 2026-02-17 06:48:01.908809 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 06:48:01.908820 | orchestrator | Tuesday 17 February 2026 06:47:58 +0000 (0:00:02.142) 1:01:14.008 ****** 2026-02-17 06:48:01.908831 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:48:01.908842 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:48:01.908853 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:48:01.908864 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-17 06:48:01.908875 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 06:48:01.908886 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 06:48:01.908898 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:48:01.908909 | orchestrator | 2026-02-17 06:48:01.908928 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-17 06:48:55.691836 | orchestrator | Tuesday 17 February 2026 06:48:01 +0000 (0:00:03.151) 1:01:17.159 ****** 2026-02-17 06:48:55.691980 | orchestrator | changed: [testbed-node-3] 2026-02-17 06:48:55.692008 | orchestrator | 2026-02-17 06:48:55.692029 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-17 06:48:55.692049 | orchestrator | Tuesday 17 February 2026 06:48:04 +0000 (0:00:02.246) 1:01:19.406 ****** 2026-02-17 06:48:55.692069 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:48:55.692089 | orchestrator | 2026-02-17 06:48:55.692110 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-17 06:48:55.692148 | orchestrator | Tuesday 17 February 2026 06:48:06 +0000 (0:00:02.763) 1:01:22.169 ****** 2026-02-17 06:48:55.692168 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:48:55.692187 | orchestrator | 2026-02-17 06:48:55.692206 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 06:48:55.692225 | orchestrator | Tuesday 17 February 2026 06:48:09 +0000 (0:00:02.276) 1:01:24.446 ****** 2026-02-17 06:48:55.692244 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-02-17 06:48:55.692263 | orchestrator | 2026-02-17 06:48:55.692282 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 06:48:55.692300 | orchestrator | Tuesday 17 February 2026 06:48:10 +0000 (0:00:01.265) 1:01:25.711 ****** 2026-02-17 06:48:55.692318 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-02-17 06:48:55.692338 | orchestrator | 2026-02-17 06:48:55.692358 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 06:48:55.692379 | orchestrator | Tuesday 17 February 2026 06:48:11 +0000 (0:00:01.184) 1:01:26.895 ****** 2026-02-17 06:48:55.692399 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.692418 | orchestrator | 2026-02-17 06:48:55.692437 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 06:48:55.692487 | orchestrator | Tuesday 17 February 2026 06:48:12 +0000 (0:00:01.151) 1:01:28.046 ****** 2026-02-17 06:48:55.692505 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.692524 | orchestrator | 2026-02-17 06:48:55.692542 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-17 06:48:55.692591 | orchestrator | Tuesday 17 February 2026 06:48:14 +0000 (0:00:01.573) 1:01:29.619 ****** 2026-02-17 06:48:55.692610 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.692629 | orchestrator | 2026-02-17 06:48:55.692650 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 06:48:55.692667 | orchestrator | Tuesday 17 February 2026 06:48:15 +0000 (0:00:01.607) 1:01:31.227 ****** 2026-02-17 06:48:55.692685 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.692703 | orchestrator | 2026-02-17 06:48:55.692721 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 06:48:55.692740 | orchestrator | Tuesday 17 February 2026 06:48:17 +0000 (0:00:01.558) 1:01:32.786 ****** 2026-02-17 06:48:55.692757 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.692775 | orchestrator | 2026-02-17 06:48:55.692794 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 06:48:55.692812 | orchestrator | Tuesday 17 February 2026 06:48:18 +0000 (0:00:01.191) 1:01:33.978 ****** 2026-02-17 06:48:55.692830 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.692848 | orchestrator | 2026-02-17 06:48:55.692866 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 06:48:55.692883 | orchestrator | Tuesday 17 February 2026 06:48:19 +0000 (0:00:01.137) 1:01:35.116 ****** 2026-02-17 06:48:55.692901 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.692918 | orchestrator | 2026-02-17 06:48:55.692936 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 06:48:55.692954 | orchestrator | Tuesday 17 February 2026 06:48:20 +0000 (0:00:01.134) 1:01:36.250 ****** 2026-02-17 06:48:55.692971 | 
orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.692989 | orchestrator | 2026-02-17 06:48:55.693006 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 06:48:55.693024 | orchestrator | Tuesday 17 February 2026 06:48:22 +0000 (0:00:01.580) 1:01:37.831 ****** 2026-02-17 06:48:55.693042 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.693061 | orchestrator | 2026-02-17 06:48:55.693079 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 06:48:55.693096 | orchestrator | Tuesday 17 February 2026 06:48:24 +0000 (0:00:01.558) 1:01:39.390 ****** 2026-02-17 06:48:55.693115 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.693133 | orchestrator | 2026-02-17 06:48:55.693151 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 06:48:55.693169 | orchestrator | Tuesday 17 February 2026 06:48:25 +0000 (0:00:01.170) 1:01:40.560 ****** 2026-02-17 06:48:55.693186 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.693205 | orchestrator | 2026-02-17 06:48:55.693224 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 06:48:55.693243 | orchestrator | Tuesday 17 February 2026 06:48:26 +0000 (0:00:01.125) 1:01:41.686 ****** 2026-02-17 06:48:55.693260 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.693279 | orchestrator | 2026-02-17 06:48:55.693297 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 06:48:55.693315 | orchestrator | Tuesday 17 February 2026 06:48:27 +0000 (0:00:01.152) 1:01:42.839 ****** 2026-02-17 06:48:55.693333 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.693352 | orchestrator | 2026-02-17 06:48:55.693371 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 06:48:55.693390 
| orchestrator | Tuesday 17 February 2026 06:48:28 +0000 (0:00:01.169) 1:01:44.008 ****** 2026-02-17 06:48:55.693409 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.693428 | orchestrator | 2026-02-17 06:48:55.693473 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 06:48:55.693509 | orchestrator | Tuesday 17 February 2026 06:48:29 +0000 (0:00:01.210) 1:01:45.219 ****** 2026-02-17 06:48:55.693529 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.693547 | orchestrator | 2026-02-17 06:48:55.693596 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 06:48:55.693615 | orchestrator | Tuesday 17 February 2026 06:48:31 +0000 (0:00:01.191) 1:01:46.410 ****** 2026-02-17 06:48:55.693631 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.693649 | orchestrator | 2026-02-17 06:48:55.693666 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 06:48:55.693683 | orchestrator | Tuesday 17 February 2026 06:48:32 +0000 (0:00:01.131) 1:01:47.542 ****** 2026-02-17 06:48:55.693713 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.693731 | orchestrator | 2026-02-17 06:48:55.693749 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 06:48:55.693766 | orchestrator | Tuesday 17 February 2026 06:48:33 +0000 (0:00:01.123) 1:01:48.665 ****** 2026-02-17 06:48:55.693785 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.693804 | orchestrator | 2026-02-17 06:48:55.693822 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 06:48:55.693839 | orchestrator | Tuesday 17 February 2026 06:48:34 +0000 (0:00:01.271) 1:01:49.936 ****** 2026-02-17 06:48:55.693857 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:48:55.693874 | orchestrator | 2026-02-17 06:48:55.693892 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-17 06:48:55.693909 | orchestrator | Tuesday 17 February 2026 06:48:35 +0000 (0:00:01.181) 1:01:51.118 ****** 2026-02-17 06:48:55.693927 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.693945 | orchestrator | 2026-02-17 06:48:55.693962 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-17 06:48:55.693981 | orchestrator | Tuesday 17 February 2026 06:48:36 +0000 (0:00:01.141) 1:01:52.260 ****** 2026-02-17 06:48:55.693998 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.694099 | orchestrator | 2026-02-17 06:48:55.694123 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-17 06:48:55.694143 | orchestrator | Tuesday 17 February 2026 06:48:38 +0000 (0:00:01.178) 1:01:53.439 ****** 2026-02-17 06:48:55.694161 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.694179 | orchestrator | 2026-02-17 06:48:55.694197 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-17 06:48:55.694215 | orchestrator | Tuesday 17 February 2026 06:48:39 +0000 (0:00:01.636) 1:01:55.075 ****** 2026-02-17 06:48:55.694233 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.694251 | orchestrator | 2026-02-17 06:48:55.694269 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-17 06:48:55.694288 | orchestrator | Tuesday 17 February 2026 06:48:40 +0000 (0:00:01.142) 1:01:56.218 ****** 2026-02-17 06:48:55.694307 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:48:55.694324 | orchestrator | 2026-02-17 06:48:55.694343 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-17 06:48:55.694361 | orchestrator | Tuesday 17 February 2026 06:48:42 +0000 (0:00:01.156) 1:01:57.375 ****** 
2026-02-17 06:48:55.694380 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:48:55.694397 | orchestrator |
2026-02-17 06:48:55.694416 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:48:55.694433 | orchestrator | Tuesday 17 February 2026 06:48:43 +0000 (0:00:01.141) 1:01:58.517 ******
2026-02-17 06:48:55.694451 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:48:55.694471 | orchestrator |
2026-02-17 06:48:55.694491 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:48:55.694510 | orchestrator | Tuesday 17 February 2026 06:48:44 +0000 (0:00:01.140) 1:01:59.658 ******
2026-02-17 06:48:55.694528 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:48:55.694546 | orchestrator |
2026-02-17 06:48:55.694595 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:48:55.694628 | orchestrator | Tuesday 17 February 2026 06:48:45 +0000 (0:00:01.163) 1:02:00.821 ******
2026-02-17 06:48:55.694648 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:48:55.694666 | orchestrator |
2026-02-17 06:48:55.694683 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:48:55.694701 | orchestrator | Tuesday 17 February 2026 06:48:46 +0000 (0:00:01.165) 1:02:01.986 ******
2026-02-17 06:48:55.694719 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:48:55.694737 | orchestrator |
2026-02-17 06:48:55.694756 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:48:55.694773 | orchestrator | Tuesday 17 February 2026 06:48:47 +0000 (0:00:01.164) 1:02:03.151 ******
2026-02-17 06:48:55.694792 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:48:55.694809 | orchestrator |
2026-02-17 06:48:55.694827 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:48:55.694845 | orchestrator | Tuesday 17 February 2026 06:48:49 +0000 (0:00:01.163) 1:02:04.314 ******
2026-02-17 06:48:55.694863 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:48:55.694880 | orchestrator |
2026-02-17 06:48:55.694897 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:48:55.694915 | orchestrator | Tuesday 17 February 2026 06:48:50 +0000 (0:00:01.283) 1:02:05.598 ******
2026-02-17 06:48:55.694933 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:48:55.694950 | orchestrator |
2026-02-17 06:48:55.694968 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:48:55.694986 | orchestrator | Tuesday 17 February 2026 06:48:52 +0000 (0:00:02.198) 1:02:07.582 ******
2026-02-17 06:48:55.695004 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:48:55.695022 | orchestrator |
2026-02-17 06:48:55.695038 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:48:55.695057 | orchestrator | Tuesday 17 February 2026 06:48:54 +0000 (0:00:02.198) 1:02:09.780 ******
2026-02-17 06:48:55.695076 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-17 06:48:55.695094 | orchestrator |
2026-02-17 06:48:55.695113 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:48:55.695148 | orchestrator | Tuesday 17 February 2026 06:48:55 +0000 (0:00:01.167) 1:02:10.948 ******
2026-02-17 06:49:42.571080 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571191 | orchestrator |
2026-02-17 06:49:42.571208 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:49:42.571221 | orchestrator | Tuesday 17 February 2026 06:48:56 +0000 (0:00:01.177) 1:02:12.126 ******
2026-02-17 06:49:42.571232 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571243 | orchestrator |
2026-02-17 06:49:42.571255 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:49:42.571266 | orchestrator | Tuesday 17 February 2026 06:48:58 +0000 (0:00:01.173) 1:02:13.300 ******
2026-02-17 06:49:42.571293 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:49:42.571305 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:49:42.571317 | orchestrator |
2026-02-17 06:49:42.571327 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:49:42.571338 | orchestrator | Tuesday 17 February 2026 06:48:59 +0000 (0:00:01.814) 1:02:15.115 ******
2026-02-17 06:49:42.571349 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:49:42.571361 | orchestrator |
2026-02-17 06:49:42.571372 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:49:42.571383 | orchestrator | Tuesday 17 February 2026 06:49:01 +0000 (0:00:01.501) 1:02:16.617 ******
2026-02-17 06:49:42.571394 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571405 | orchestrator |
2026-02-17 06:49:42.571416 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:49:42.571427 | orchestrator | Tuesday 17 February 2026 06:49:02 +0000 (0:00:01.158) 1:02:17.775 ******
2026-02-17 06:49:42.571462 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571474 | orchestrator |
2026-02-17 06:49:42.571485 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:49:42.571496 | orchestrator | Tuesday 17 February 2026 06:49:03 +0000 (0:00:01.145) 1:02:18.921 ******
2026-02-17 06:49:42.571506 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571517 | orchestrator |
2026-02-17 06:49:42.571584 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:49:42.571595 | orchestrator | Tuesday 17 February 2026 06:49:04 +0000 (0:00:01.109) 1:02:20.030 ******
2026-02-17 06:49:42.571608 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-17 06:49:42.571621 | orchestrator |
2026-02-17 06:49:42.571633 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 06:49:42.571646 | orchestrator | Tuesday 17 February 2026 06:49:06 +0000 (0:00:01.306) 1:02:21.337 ******
2026-02-17 06:49:42.571658 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:49:42.571670 | orchestrator |
2026-02-17 06:49:42.571683 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 06:49:42.571695 | orchestrator | Tuesday 17 February 2026 06:49:07 +0000 (0:00:01.666) 1:02:23.003 ******
2026-02-17 06:49:42.571707 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:49:42.571719 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:49:42.571731 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:49:42.571743 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571756 | orchestrator |
2026-02-17 06:49:42.571768 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 06:49:42.571780 | orchestrator | Tuesday 17 February 2026 06:49:08 +0000 (0:00:01.186) 1:02:24.190 ******
2026-02-17 06:49:42.571792 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571804 | orchestrator |
2026-02-17 06:49:42.571816 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 06:49:42.571828 | orchestrator | Tuesday 17 February 2026 06:49:10 +0000 (0:00:01.093) 1:02:25.283 ******
2026-02-17 06:49:42.571840 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571852 | orchestrator |
2026-02-17 06:49:42.571864 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 06:49:42.571877 | orchestrator | Tuesday 17 February 2026 06:49:11 +0000 (0:00:01.185) 1:02:26.469 ******
2026-02-17 06:49:42.571889 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571902 | orchestrator |
2026-02-17 06:49:42.571914 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 06:49:42.571926 | orchestrator | Tuesday 17 February 2026 06:49:12 +0000 (0:00:01.143) 1:02:27.613 ******
2026-02-17 06:49:42.571938 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571950 | orchestrator |
2026-02-17 06:49:42.571963 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-17 06:49:42.571973 | orchestrator | Tuesday 17 February 2026 06:49:13 +0000 (0:00:01.192) 1:02:28.805 ******
2026-02-17 06:49:42.571984 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.571995 | orchestrator |
2026-02-17 06:49:42.572006 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 06:49:42.572017 | orchestrator | Tuesday 17 February 2026 06:49:14 +0000 (0:00:01.172) 1:02:29.978 ******
2026-02-17 06:49:42.572027 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:49:42.572038 | orchestrator |
2026-02-17 06:49:42.572049 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 06:49:42.572060 | orchestrator | Tuesday 17 February 2026 06:49:17 +0000 (0:00:02.405) 1:02:32.384 ******
2026-02-17 06:49:42.572071 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:49:42.572081 | orchestrator |
2026-02-17 06:49:42.572092 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 06:49:42.572111 | orchestrator | Tuesday 17 February 2026 06:49:18 +0000 (0:00:01.168) 1:02:33.552 ******
2026-02-17 06:49:42.572122 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-17 06:49:42.572133 | orchestrator |
2026-02-17 06:49:42.572144 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 06:49:42.572172 | orchestrator | Tuesday 17 February 2026 06:49:19 +0000 (0:00:01.115) 1:02:34.667 ******
2026-02-17 06:49:42.572184 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.572195 | orchestrator |
2026-02-17 06:49:42.572206 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 06:49:42.572217 | orchestrator | Tuesday 17 February 2026 06:49:20 +0000 (0:00:01.274) 1:02:35.941 ******
2026-02-17 06:49:42.572227 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.572238 | orchestrator |
2026-02-17 06:49:42.572249 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 06:49:42.572260 | orchestrator | Tuesday 17 February 2026 06:49:21 +0000 (0:00:01.188) 1:02:37.130 ******
2026-02-17 06:49:42.572276 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.572287 | orchestrator |
2026-02-17 06:49:42.572298 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 06:49:42.572308 | orchestrator | Tuesday 17 February 2026 06:49:23 +0000 (0:00:01.151) 1:02:38.281 ******
2026-02-17 06:49:42.572319 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.572330 | orchestrator |
2026-02-17 06:49:42.572341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 06:49:42.572352 | orchestrator | Tuesday 17 February 2026 06:49:24 +0000 (0:00:01.153) 1:02:39.435 ******
2026-02-17 06:49:42.572363 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.572373 | orchestrator |
2026-02-17 06:49:42.572384 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 06:49:42.572395 | orchestrator | Tuesday 17 February 2026 06:49:25 +0000 (0:00:01.159) 1:02:40.595 ******
2026-02-17 06:49:42.572406 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.572416 | orchestrator |
2026-02-17 06:49:42.572427 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 06:49:42.572438 | orchestrator | Tuesday 17 February 2026 06:49:26 +0000 (0:00:01.155) 1:02:41.751 ******
2026-02-17 06:49:42.572449 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.572459 | orchestrator |
2026-02-17 06:49:42.572470 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 06:49:42.572481 | orchestrator | Tuesday 17 February 2026 06:49:27 +0000 (0:00:01.196) 1:02:42.947 ******
2026-02-17 06:49:42.572492 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:49:42.572502 | orchestrator |
2026-02-17 06:49:42.572513 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 06:49:42.572541 | orchestrator | Tuesday 17 February 2026 06:49:28 +0000 (0:00:01.168) 1:02:44.116 ******
2026-02-17 06:49:42.572553 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:49:42.572563 | orchestrator |
2026-02-17 06:49:42.572574 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 06:49:42.572585 | orchestrator | Tuesday 17 February 2026 06:49:30 +0000 (0:00:01.199) 1:02:45.316 ******
2026-02-17 06:49:42.572596 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-17 06:49:42.572607 | orchestrator |
2026-02-17 06:49:42.572618 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 06:49:42.572629 | orchestrator | Tuesday 17 February 2026 06:49:31 +0000 (0:00:01.146) 1:02:46.463 ******
2026-02-17 06:49:42.572640 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-17 06:49:42.572651 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-17 06:49:42.572662 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-17 06:49:42.572673 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-17 06:49:42.572691 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-17 06:49:42.572702 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-17 06:49:42.572712 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-17 06:49:42.572723 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-17 06:49:42.572734 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 06:49:42.572745 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 06:49:42.572756 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 06:49:42.572767 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 06:49:42.572778 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 06:49:42.572788 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 06:49:42.572799 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-17 06:49:42.572810 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-17 06:49:42.572821 | orchestrator |
2026-02-17 06:49:42.572832 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 06:49:42.572842 | orchestrator | Tuesday 17 February 2026 06:49:37 +0000 (0:00:06.620) 1:02:53.083 ******
2026-02-17 06:49:42.572853 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-17 06:49:42.572864 | orchestrator |
2026-02-17 06:49:42.572875 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-17 06:49:42.572886 | orchestrator | Tuesday 17 February 2026 06:49:39 +0000 (0:00:01.300) 1:02:54.384 ******
2026-02-17 06:49:42.572897 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-17 06:49:42.572909 | orchestrator |
2026-02-17 06:49:42.572920 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-17 06:49:42.572930 | orchestrator | Tuesday 17 February 2026 06:49:40 +0000 (0:00:01.497) 1:02:55.882 ******
2026-02-17 06:49:42.572941 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-17 06:49:42.572952 | orchestrator |
2026-02-17 06:49:42.572964 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 06:49:42.572982 | orchestrator | Tuesday 17 February 2026 06:49:42 +0000 (0:00:01.163) 1:02:57.828 ******
2026-02-17 06:50:33.106391 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106557 | orchestrator |
2026-02-17 06:50:33.106578 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 06:50:33.106592 | orchestrator | Tuesday 17 February 2026 06:49:43 +0000 (0:00:01.163) 1:02:58.992 ******
2026-02-17 06:50:33.106603 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106615 | orchestrator |
2026-02-17 06:50:33.106627 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 06:50:33.106638 | orchestrator | Tuesday 17 February 2026 06:49:44 +0000 (0:00:01.181) 1:03:00.174 ******
2026-02-17 06:50:33.106667 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106679 | orchestrator |
2026-02-17 06:50:33.106690 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 06:50:33.106702 | orchestrator | Tuesday 17 February 2026 06:49:46 +0000 (0:00:01.152) 1:03:01.327 ******
2026-02-17 06:50:33.106713 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106723 | orchestrator |
2026-02-17 06:50:33.106735 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 06:50:33.106746 | orchestrator | Tuesday 17 February 2026 06:49:47 +0000 (0:00:01.142) 1:03:02.469 ******
2026-02-17 06:50:33.106757 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106768 | orchestrator |
2026-02-17 06:50:33.106779 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 06:50:33.106818 | orchestrator | Tuesday 17 February 2026 06:49:48 +0000 (0:00:01.155) 1:03:03.625 ******
2026-02-17 06:50:33.106829 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106841 | orchestrator |
2026-02-17 06:50:33.106852 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 06:50:33.106863 | orchestrator | Tuesday 17 February 2026 06:49:49 +0000 (0:00:01.117) 1:03:04.742 ******
2026-02-17 06:50:33.106874 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106885 | orchestrator |
2026-02-17 06:50:33.106896 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 06:50:33.106907 | orchestrator | Tuesday 17 February 2026 06:49:50 +0000 (0:00:01.134) 1:03:05.877 ******
2026-02-17 06:50:33.106920 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106932 | orchestrator |
2026-02-17 06:50:33.106944 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 06:50:33.106957 | orchestrator | Tuesday 17 February 2026 06:49:51 +0000 (0:00:01.152) 1:03:07.029 ******
2026-02-17 06:50:33.106969 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.106981 | orchestrator |
2026-02-17 06:50:33.106993 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 06:50:33.107006 | orchestrator | Tuesday 17 February 2026 06:49:52 +0000 (0:00:01.125) 1:03:08.155 ******
2026-02-17 06:50:33.107018 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107030 | orchestrator |
2026-02-17 06:50:33.107041 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 06:50:33.107052 | orchestrator | Tuesday 17 February 2026 06:49:54 +0000 (0:00:01.136) 1:03:09.292 ******
2026-02-17 06:50:33.107063 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107074 | orchestrator |
2026-02-17 06:50:33.107085 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 06:50:33.107096 | orchestrator | Tuesday 17 February 2026 06:49:55 +0000 (0:00:01.206) 1:03:10.498 ******
2026-02-17 06:50:33.107107 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-17 06:50:33.107118 | orchestrator |
2026-02-17 06:50:33.107129 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 06:50:33.107140 | orchestrator | Tuesday 17 February 2026 06:49:59 +0000 (0:00:04.269) 1:03:14.768 ******
2026-02-17 06:50:33.107151 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-17 06:50:33.107163 | orchestrator |
2026-02-17 06:50:33.107175 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 06:50:33.107185 | orchestrator | Tuesday 17 February 2026 06:50:00 +0000 (0:00:01.242) 1:03:16.011 ******
2026-02-17 06:50:33.107198 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-17 06:50:33.107213 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-17 06:50:33.107225 | orchestrator |
2026-02-17 06:50:33.107236 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 06:50:33.107262 | orchestrator | Tuesday 17 February 2026 06:50:05 +0000 (0:00:04.764) 1:03:20.776 ******
2026-02-17 06:50:33.107274 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107284 | orchestrator |
2026-02-17 06:50:33.107295 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 06:50:33.107306 | orchestrator | Tuesday 17 February 2026 06:50:06 +0000 (0:00:01.154) 1:03:21.930 ******
2026-02-17 06:50:33.107325 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107336 | orchestrator |
2026-02-17 06:50:33.107347 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:50:33.107376 | orchestrator | Tuesday 17 February 2026 06:50:07 +0000 (0:00:01.124) 1:03:23.054 ******
2026-02-17 06:50:33.107387 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107399 | orchestrator |
2026-02-17 06:50:33.107410 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:50:33.107421 | orchestrator | Tuesday 17 February 2026 06:50:08 +0000 (0:00:01.207) 1:03:24.262 ******
2026-02-17 06:50:33.107432 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107443 | orchestrator |
2026-02-17 06:50:33.107453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:50:33.107470 | orchestrator | Tuesday 17 February 2026 06:50:10 +0000 (0:00:01.187) 1:03:25.450 ******
2026-02-17 06:50:33.107482 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107513 | orchestrator |
2026-02-17 06:50:33.107524 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:50:33.107535 | orchestrator | Tuesday 17 February 2026 06:50:11 +0000 (0:00:01.183) 1:03:26.633 ******
2026-02-17 06:50:33.107546 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:50:33.107558 | orchestrator |
2026-02-17 06:50:33.107569 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:50:33.107580 | orchestrator | Tuesday 17 February 2026 06:50:12 +0000 (0:00:01.256) 1:03:27.890 ******
2026-02-17 06:50:33.107591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:50:33.107602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:50:33.107613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:50:33.107624 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107635 | orchestrator |
2026-02-17 06:50:33.107645 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:50:33.107656 | orchestrator | Tuesday 17 February 2026 06:50:14 +0000 (0:00:01.447) 1:03:29.337 ******
2026-02-17 06:50:33.107667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:50:33.107678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:50:33.107689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:50:33.107699 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107710 | orchestrator |
2026-02-17 06:50:33.107721 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:50:33.107732 | orchestrator | Tuesday 17 February 2026 06:50:15 +0000 (0:00:01.815) 1:03:31.152 ******
2026-02-17 06:50:33.107743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-17 06:50:33.107753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-17 06:50:33.107764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-17 06:50:33.107775 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.107785 | orchestrator |
2026-02-17 06:50:33.107796 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:50:33.107807 | orchestrator | Tuesday 17 February 2026 06:50:17 +0000 (0:00:01.836) 1:03:32.989 ******
2026-02-17 06:50:33.107818 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:50:33.107829 | orchestrator |
2026-02-17 06:50:33.107840 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:50:33.107850 | orchestrator | Tuesday 17 February 2026 06:50:19 +0000 (0:00:01.308) 1:03:34.297 ******
2026-02-17 06:50:33.107861 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-17 06:50:33.107872 | orchestrator |
2026-02-17 06:50:33.107883 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 06:50:33.107894 | orchestrator | Tuesday 17 February 2026 06:50:20 +0000 (0:00:01.374) 1:03:35.672 ******
2026-02-17 06:50:33.107912 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:50:33.107923 | orchestrator |
2026-02-17 06:50:33.107934 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-17 06:50:33.107945 | orchestrator | Tuesday 17 February 2026 06:50:22 +0000 (0:00:01.761) 1:03:37.433 ******
2026-02-17 06:50:33.107956 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3
2026-02-17 06:50:33.107967 | orchestrator |
2026-02-17 06:50:33.107977 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-17 06:50:33.107988 | orchestrator | Tuesday 17 February 2026 06:50:23 +0000 (0:00:01.528) 1:03:38.962 ******
2026-02-17 06:50:33.107999 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-17 06:50:33.108010 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-17 06:50:33.108021 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-17 06:50:33.108032 | orchestrator |
2026-02-17 06:50:33.108043 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-17 06:50:33.108053 | orchestrator | Tuesday 17 February 2026 06:50:26 +0000 (0:00:03.149) 1:03:42.111 ******
2026-02-17 06:50:33.108064 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-17 06:50:33.108075 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-17 06:50:33.108086 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:50:33.108097 | orchestrator |
2026-02-17 06:50:33.108108 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-17 06:50:33.108119 | orchestrator | Tuesday 17 February 2026 06:50:28 +0000 (0:00:01.994) 1:03:44.105 ******
2026-02-17 06:50:33.108130 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:50:33.108140 | orchestrator |
2026-02-17 06:50:33.108151 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-17 06:50:33.108162 | orchestrator | Tuesday 17 February 2026 06:50:29 +0000 (0:00:01.132) 1:03:45.238 ******
2026-02-17 06:50:33.108173 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3
2026-02-17 06:50:33.108184 | orchestrator |
2026-02-17 06:50:33.108195 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-17 06:50:33.108206 | orchestrator | Tuesday 17 February 2026 06:50:31 +0000 (0:00:01.485) 1:03:46.724 ******
2026-02-17 06:50:33.108224 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-17 06:51:46.737790 | orchestrator |
2026-02-17 06:51:46.737960 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-17 06:51:46.737988 | orchestrator | Tuesday 17 February 2026 06:50:33 +0000 (0:00:01.643) 1:03:48.367 ******
2026-02-17 06:51:46.738009 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-17 06:51:46.738147 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-17 06:51:46.738182 | orchestrator |
2026-02-17 06:51:46.738202 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-17 06:51:46.738223 | orchestrator | Tuesday 17 February 2026 06:50:38 +0000 (0:00:05.339) 1:03:53.707 ******
2026-02-17 06:51:46.738244 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-17 06:51:46.738265 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-17 06:51:46.738286 | orchestrator |
2026-02-17 06:51:46.738308 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-17 06:51:46.738346 | orchestrator | Tuesday 17 February 2026 06:50:41 +0000 (0:00:03.063) 1:03:56.770 ******
2026-02-17 06:51:46.738365 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-17 06:51:46.738385 | orchestrator | ok: [testbed-node-3]
2026-02-17 06:51:46.738405 | orchestrator |
2026-02-17 06:51:46.738426 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-17 06:51:46.738503 | orchestrator | Tuesday 17 February 2026 06:50:43 +0000 (0:00:02.044) 1:03:58.814 ******
2026-02-17 06:51:46.738527 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-02-17 06:51:46.738548 | orchestrator |
2026-02-17 06:51:46.738569 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-17 06:51:46.738591 | orchestrator | Tuesday 17 February 2026 06:50:45 +0000 (0:00:01.513) 1:04:00.328 ******
2026-02-17 06:51:46.738611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738717 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:51:46.738736 | orchestrator |
2026-02-17 06:51:46.738757 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-17 06:51:46.738779 | orchestrator | Tuesday 17 February 2026 06:50:46 +0000 (0:00:01.684) 1:04:02.013 ******
2026-02-17 06:51:46.738799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.738904 | orchestrator | skipping: [testbed-node-3]
2026-02-17 06:51:46.738924 | orchestrator |
2026-02-17 06:51:46.738946 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-17 06:51:46.738967 | orchestrator | Tuesday 17 February 2026 06:50:48 +0000 (0:00:01.600) 1:04:03.613 ******
2026-02-17 06:51:46.738989 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-17 06:51:46.739012
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:51:46.739033 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:51:46.739053 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:51:46.739074 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:51:46.739096 | orchestrator | 2026-02-17 06:51:46.739118 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-17 06:51:46.739159 | orchestrator | Tuesday 17 February 2026 06:51:19 +0000 (0:00:30.753) 1:04:34.366 ****** 2026-02-17 06:51:46.739179 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:51:46.739198 | orchestrator | 2026-02-17 06:51:46.739230 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-17 06:51:46.739249 | orchestrator | Tuesday 17 February 2026 06:51:20 +0000 (0:00:01.194) 1:04:35.561 ****** 2026-02-17 06:51:46.739268 | orchestrator | skipping: [testbed-node-3] 2026-02-17 06:51:46.739286 | orchestrator | 2026-02-17 06:51:46.739305 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-17 06:51:46.739334 | orchestrator | Tuesday 17 February 2026 06:51:21 +0000 (0:00:01.172) 1:04:36.734 ****** 2026-02-17 06:51:46.739355 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-17 06:51:46.739373 | orchestrator | 2026-02-17 06:51:46.739391 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-17 06:51:46.739409 | orchestrator | Tuesday 17 February 2026 06:51:22 +0000 (0:00:01.477) 1:04:38.212 ****** 2026-02-17 06:51:46.739429 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-17 06:51:46.739486 | orchestrator | 2026-02-17 06:51:46.739504 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-17 06:51:46.739523 | orchestrator | Tuesday 17 February 2026 06:51:24 +0000 (0:00:01.633) 1:04:39.845 ****** 2026-02-17 06:51:46.739541 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:51:46.739560 | orchestrator | 2026-02-17 06:51:46.739579 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-17 06:51:46.739598 | orchestrator | Tuesday 17 February 2026 06:51:26 +0000 (0:00:02.069) 1:04:41.915 ****** 2026-02-17 06:51:46.739615 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:51:46.739634 | orchestrator | 2026-02-17 06:51:46.739654 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-17 06:51:46.739673 | orchestrator | Tuesday 17 February 2026 06:51:28 +0000 (0:00:01.960) 1:04:43.876 ****** 2026-02-17 06:51:46.739692 | orchestrator | ok: [testbed-node-3] 2026-02-17 06:51:46.739713 | orchestrator | 2026-02-17 06:51:46.739732 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-17 06:51:46.739751 | orchestrator | Tuesday 17 February 2026 06:51:30 +0000 (0:00:02.354) 1:04:46.230 ****** 2026-02-17 06:51:46.739769 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-17 06:51:46.739785 | orchestrator | 2026-02-17 06:51:46.739801 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-17 06:51:46.739818 | 
orchestrator | 2026-02-17 06:51:46.739834 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:51:46.739851 | orchestrator | Tuesday 17 February 2026 06:51:33 +0000 (0:00:02.811) 1:04:49.042 ****** 2026-02-17 06:51:46.739868 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-17 06:51:46.739885 | orchestrator | 2026-02-17 06:51:46.739901 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:51:46.739917 | orchestrator | Tuesday 17 February 2026 06:51:34 +0000 (0:00:01.119) 1:04:50.161 ****** 2026-02-17 06:51:46.739934 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:51:46.739950 | orchestrator | 2026-02-17 06:51:46.739967 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:51:46.739984 | orchestrator | Tuesday 17 February 2026 06:51:36 +0000 (0:00:01.542) 1:04:51.703 ****** 2026-02-17 06:51:46.740000 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:51:46.740016 | orchestrator | 2026-02-17 06:51:46.740032 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:51:46.740049 | orchestrator | Tuesday 17 February 2026 06:51:37 +0000 (0:00:01.126) 1:04:52.830 ****** 2026-02-17 06:51:46.740065 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:51:46.740082 | orchestrator | 2026-02-17 06:51:46.740099 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:51:46.740116 | orchestrator | Tuesday 17 February 2026 06:51:39 +0000 (0:00:01.547) 1:04:54.377 ****** 2026-02-17 06:51:46.740131 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:51:46.740164 | orchestrator | 2026-02-17 06:51:46.740181 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:51:46.740198 | orchestrator | Tuesday 17 
February 2026 06:51:40 +0000 (0:00:01.252) 1:04:55.630 ****** 2026-02-17 06:51:46.740215 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:51:46.740231 | orchestrator | 2026-02-17 06:51:46.740246 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:51:46.740263 | orchestrator | Tuesday 17 February 2026 06:51:41 +0000 (0:00:01.189) 1:04:56.820 ****** 2026-02-17 06:51:46.740280 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:51:46.740297 | orchestrator | 2026-02-17 06:51:46.740314 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:51:46.740330 | orchestrator | Tuesday 17 February 2026 06:51:42 +0000 (0:00:01.168) 1:04:57.988 ****** 2026-02-17 06:51:46.740346 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:51:46.740363 | orchestrator | 2026-02-17 06:51:46.740379 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:51:46.740396 | orchestrator | Tuesday 17 February 2026 06:51:43 +0000 (0:00:01.142) 1:04:59.131 ****** 2026-02-17 06:51:46.740414 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:51:46.740430 | orchestrator | 2026-02-17 06:51:46.740505 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:51:46.740524 | orchestrator | Tuesday 17 February 2026 06:51:45 +0000 (0:00:01.155) 1:05:00.286 ****** 2026-02-17 06:51:46.740540 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:51:46.740556 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:51:46.740572 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:51:46.740589 | orchestrator | 2026-02-17 06:51:46.740605 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-17 06:51:46.740632 | orchestrator | Tuesday 17 February 2026 06:51:46 +0000 (0:00:01.704) 1:05:01.991 ****** 2026-02-17 06:52:12.519216 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:12.519328 | orchestrator | 2026-02-17 06:52:12.519343 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:52:12.519356 | orchestrator | Tuesday 17 February 2026 06:51:48 +0000 (0:00:01.335) 1:05:03.327 ****** 2026-02-17 06:52:12.519367 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:52:12.519394 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:52:12.519406 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:52:12.519416 | orchestrator | 2026-02-17 06:52:12.519481 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:52:12.519494 | orchestrator | Tuesday 17 February 2026 06:51:51 +0000 (0:00:03.010) 1:05:06.338 ****** 2026-02-17 06:52:12.519506 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-17 06:52:12.519517 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-17 06:52:12.519527 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-17 06:52:12.519538 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:12.519550 | orchestrator | 2026-02-17 06:52:12.519561 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:52:12.519571 | orchestrator | Tuesday 17 February 2026 06:51:52 +0000 (0:00:01.570) 1:05:07.909 ****** 2026-02-17 06:52:12.519585 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:52:12.519599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:52:12.519634 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:52:12.519646 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:12.519657 | orchestrator | 2026-02-17 06:52:12.519668 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:52:12.519678 | orchestrator | Tuesday 17 February 2026 06:51:54 +0000 (0:00:02.086) 1:05:09.995 ****** 2026-02-17 06:52:12.519691 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:12.519705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:12.519717 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:12.519728 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:12.519739 | orchestrator | 2026-02-17 06:52:12.519752 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:52:12.519765 | orchestrator | Tuesday 17 February 2026 06:51:55 +0000 (0:00:01.254) 1:05:11.251 ****** 2026-02-17 06:52:12.519799 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:51:48.576210', 'end': '2026-02-17 06:51:48.634706', 'delta': '0:00:00.058496', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:52:12.519822 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:51:49.203468', 'end': '2026-02-17 06:51:49.259000', 'delta': '0:00:00.055532', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:52:12.519835 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:51:49.781547', 'end': '2026-02-17 06:51:49.825846', 'delta': '0:00:00.044299', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:52:12.519856 | orchestrator | 2026-02-17 06:52:12.519869 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:52:12.519881 | orchestrator | Tuesday 17 February 2026 06:51:57 +0000 (0:00:01.263) 1:05:12.514 ****** 2026-02-17 06:52:12.519893 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:12.519906 | orchestrator | 2026-02-17 06:52:12.519919 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:52:12.519931 | orchestrator | Tuesday 17 February 2026 06:51:58 +0000 (0:00:01.292) 1:05:13.806 ****** 2026-02-17 06:52:12.519943 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:12.519956 | orchestrator | 2026-02-17 06:52:12.519968 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-17 06:52:12.519981 | orchestrator | Tuesday 17 February 2026 06:51:59 +0000 (0:00:01.291) 1:05:15.098 ****** 2026-02-17 06:52:12.519993 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:12.520005 | orchestrator | 2026-02-17 06:52:12.520017 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:52:12.520029 | orchestrator | Tuesday 17 February 2026 06:52:01 +0000 (0:00:01.261) 1:05:16.359 ****** 2026-02-17 06:52:12.520042 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:52:12.520054 | orchestrator | 2026-02-17 06:52:12.520066 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:52:12.520079 | orchestrator | Tuesday 17 February 2026 06:52:03 +0000 (0:00:01.985) 1:05:18.345 ****** 2026-02-17 06:52:12.520090 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:12.520101 | orchestrator | 2026-02-17 06:52:12.520112 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:52:12.520123 | orchestrator | Tuesday 17 February 2026 06:52:04 +0000 (0:00:01.125) 1:05:19.470 ****** 2026-02-17 06:52:12.520134 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:12.520145 | orchestrator | 2026-02-17 06:52:12.520155 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:52:12.520166 | orchestrator | Tuesday 17 February 2026 06:52:05 +0000 (0:00:01.165) 1:05:20.635 ****** 2026-02-17 06:52:12.520177 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:12.520188 | orchestrator | 2026-02-17 06:52:12.520199 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:52:12.520210 | orchestrator | Tuesday 17 February 2026 06:52:06 +0000 (0:00:01.261) 1:05:21.896 ****** 2026-02-17 06:52:12.520220 | orchestrator | 
skipping: [testbed-node-4] 2026-02-17 06:52:12.520231 | orchestrator | 2026-02-17 06:52:12.520242 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:52:12.520253 | orchestrator | Tuesday 17 February 2026 06:52:07 +0000 (0:00:01.132) 1:05:23.029 ****** 2026-02-17 06:52:12.520264 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:12.520275 | orchestrator | 2026-02-17 06:52:12.520286 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:52:12.520297 | orchestrator | Tuesday 17 February 2026 06:52:08 +0000 (0:00:01.189) 1:05:24.218 ****** 2026-02-17 06:52:12.520308 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:12.520319 | orchestrator | 2026-02-17 06:52:12.520329 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:52:12.520340 | orchestrator | Tuesday 17 February 2026 06:52:10 +0000 (0:00:01.176) 1:05:25.395 ****** 2026-02-17 06:52:12.520351 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:12.520369 | orchestrator | 2026-02-17 06:52:12.520380 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:52:12.520391 | orchestrator | Tuesday 17 February 2026 06:52:11 +0000 (0:00:01.183) 1:05:26.579 ****** 2026-02-17 06:52:12.520401 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:12.520412 | orchestrator | 2026-02-17 06:52:12.520454 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:52:12.520481 | orchestrator | Tuesday 17 February 2026 06:52:12 +0000 (0:00:01.195) 1:05:27.774 ****** 2026-02-17 06:52:15.079023 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:15.079115 | orchestrator | 2026-02-17 06:52:15.079124 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:52:15.079130 
| orchestrator | Tuesday 17 February 2026 06:52:13 +0000 (0:00:01.166) 1:05:28.940 ****** 2026-02-17 06:52:15.079136 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:15.079143 | orchestrator | 2026-02-17 06:52:15.079160 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:52:15.079165 | orchestrator | Tuesday 17 February 2026 06:52:14 +0000 (0:00:01.155) 1:05:30.096 ****** 2026-02-17 06:52:15.079172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:52:15.079181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'uuids': ['dab48e76-bd26-40e2-b056-8f58a903c67b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL']}})  2026-02-17 06:52:15.079190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd9c05b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:52:15.079196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b']}})  2026-02-17 06:52:15.079202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:52:15.079221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:52:15.079245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:52:15.079255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:52:15.079262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08', 'dm-uuid-CRYPT-LUKS2-40a19dfb08344771a8e6cfe7009b1e1d-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:52:15.079270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:52:15.079278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'uuids': ['40a19dfb-0834-4771-a8e6-cfe7009b1e1d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08']}})  2026-02-17 06:52:15.079286 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0']}})  2026-02-17 06:52:15.079299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:52:15.079322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '95350bd6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:52:16.424720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:52:16.424839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:52:16.424858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL', 'dm-uuid-CRYPT-LUKS2-dab48e76bd2640e2b0568f58a903c67b-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:52:16.424896 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:16.424953 | orchestrator | 2026-02-17 06:52:16.424967 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:52:16.424979 | orchestrator | Tuesday 17 February 2026 06:52:16 +0000 (0:00:01.357) 1:05:31.454 ****** 2026-02-17 06:52:16.424992 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:16.425020 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0', 'dm-uuid-LVM-1090XD0OQTXAUZ8Wi2itjP3x0pRPhKdJ71eR21JxQlgIFLFoMTECyYLYHcwxnfxL'], 'uuids': ['dab48e76-bd26-40e2-b056-8f58a903c67b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:16.425034 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416', 'scsi-SQEMU_QEMU_HARDDISK_fd9c05b9-f9ca-4e15-8356-6060fba46416'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd9c05b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:16.425065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Q1xf2-RGpc-wX5q-Dyrb-JYWs-YxxT-Ex0yzM', 'scsi-0QEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856', 'scsi-SQEMU_QEMU_HARDDISK_f250a0b0-2ca1-4b6e-93a1-cfc431f0e856'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:16.425089 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:16.425102 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:16.425119 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:16.425132 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:16.425151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08', 'dm-uuid-CRYPT-LUKS2-40a19dfb08344771a8e6cfe7009b1e1d-mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.821788 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.821954 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b-osd--block--33b7cf65--698e--5092--b1e1--7b58bfaeaf8b', 'dm-uuid-LVM-w2PNfUKThVSg1H9faDUMB8g6Z1jBYkY5mXvk0wLk6F5eMbZwtsfba3i1pVrW6O08'], 'uuids': ['40a19dfb-0834-4771-a8e6-cfe7009b1e1d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f250a0b0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mXvk0w-Lk6F-5eMb-Zwts-fba3-i1pV-rW6O08']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.821993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-3QMQw3-wrUd-kJux-0pE0-HZxP-2qKa-sF9TSf', 'scsi-0QEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67', 'scsi-SQEMU_QEMU_HARDDISK_16391a47-5928-45dd-a24a-c21b57e88b67'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16391a47', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8aff4da6--f81a--563d--a807--caa30e1cb6b0-osd--block--8aff4da6--f81a--563d--a807--caa30e1cb6b0']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.822010 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.822107 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '95350bd6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1', 'scsi-SQEMU_QEMU_HARDDISK_95350bd6-b245-44d1-bed2-d3debca83b15-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.822133 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.822151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.822164 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL', 'dm-uuid-CRYPT-LUKS2-dab48e76bd2640e2b0568f58a903c67b-71eR21-JxQl-gIFL-FoMT-ECyY-LYHc-wxnfxL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:52:21.822178 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:52:21.822191 | orchestrator | 2026-02-17 06:52:21.822203 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 06:52:21.822215 | orchestrator | Tuesday 17 February 2026 06:52:17 +0000 (0:00:01.527) 1:05:32.981 ****** 2026-02-17 06:52:21.822227 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:21.822239 | orchestrator | 2026-02-17 06:52:21.822250 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 06:52:21.822261 | orchestrator | Tuesday 17 February 2026 06:52:19 +0000 (0:00:01.475) 1:05:34.457 ****** 2026-02-17 06:52:21.822272 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:21.822284 | orchestrator | 2026-02-17 06:52:21.822297 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:52:21.822310 | orchestrator | Tuesday 17 February 2026 06:52:20 +0000 (0:00:01.167) 1:05:35.624 ****** 2026-02-17 06:52:21.822322 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:52:21.822334 | orchestrator | 2026-02-17 06:52:21.822353 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:52:21.822393 | orchestrator | Tuesday 17 February 2026 06:52:21 +0000 (0:00:01.458) 1:05:37.083 ****** 2026-02-17 06:53:04.015196 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.015329 | orchestrator | 2026-02-17 06:53:04.015346 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:53:04.015360 | orchestrator | Tuesday 17 February 2026 06:52:22 +0000 (0:00:01.181) 1:05:38.264 ****** 2026-02-17 06:53:04.015371 | orchestrator | skipping: [testbed-node-4] 2026-02-17 
06:53:04.015383 | orchestrator | 2026-02-17 06:53:04.015464 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:53:04.015487 | orchestrator | Tuesday 17 February 2026 06:52:24 +0000 (0:00:01.282) 1:05:39.546 ****** 2026-02-17 06:53:04.015507 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.015525 | orchestrator | 2026-02-17 06:53:04.015543 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 06:53:04.015562 | orchestrator | Tuesday 17 February 2026 06:52:25 +0000 (0:00:01.187) 1:05:40.733 ****** 2026-02-17 06:53:04.015583 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-17 06:53:04.015601 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-17 06:53:04.015617 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-17 06:53:04.015628 | orchestrator | 2026-02-17 06:53:04.015639 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 06:53:04.015651 | orchestrator | Tuesday 17 February 2026 06:52:27 +0000 (0:00:01.733) 1:05:42.467 ****** 2026-02-17 06:53:04.015662 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-17 06:53:04.015673 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-17 06:53:04.015685 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-17 06:53:04.015696 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.015707 | orchestrator | 2026-02-17 06:53:04.015722 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 06:53:04.015735 | orchestrator | Tuesday 17 February 2026 06:52:28 +0000 (0:00:01.146) 1:05:43.613 ****** 2026-02-17 06:53:04.015748 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-17 06:53:04.015762 | 
orchestrator | 2026-02-17 06:53:04.015777 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:53:04.015790 | orchestrator | Tuesday 17 February 2026 06:52:29 +0000 (0:00:01.142) 1:05:44.756 ****** 2026-02-17 06:53:04.015801 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.015813 | orchestrator | 2026-02-17 06:53:04.015823 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:53:04.015836 | orchestrator | Tuesday 17 February 2026 06:52:30 +0000 (0:00:01.145) 1:05:45.902 ****** 2026-02-17 06:53:04.015855 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.015873 | orchestrator | 2026-02-17 06:53:04.015892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:53:04.015911 | orchestrator | Tuesday 17 February 2026 06:52:31 +0000 (0:00:01.168) 1:05:47.070 ****** 2026-02-17 06:53:04.015929 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.015949 | orchestrator | 2026-02-17 06:53:04.015967 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:53:04.015985 | orchestrator | Tuesday 17 February 2026 06:52:33 +0000 (0:00:01.207) 1:05:48.278 ****** 2026-02-17 06:53:04.015996 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:53:04.016008 | orchestrator | 2026-02-17 06:53:04.016019 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:53:04.016030 | orchestrator | Tuesday 17 February 2026 06:52:34 +0000 (0:00:01.283) 1:05:49.562 ****** 2026-02-17 06:53:04.016058 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:53:04.016070 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:53:04.016081 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-17 06:53:04.016116 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.016128 | orchestrator | 2026-02-17 06:53:04.016139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:53:04.016150 | orchestrator | Tuesday 17 February 2026 06:52:35 +0000 (0:00:01.481) 1:05:51.043 ****** 2026-02-17 06:53:04.016161 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:53:04.016172 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:53:04.016186 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:53:04.016203 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.016221 | orchestrator | 2026-02-17 06:53:04.016240 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:53:04.016257 | orchestrator | Tuesday 17 February 2026 06:52:37 +0000 (0:00:01.424) 1:05:52.467 ****** 2026-02-17 06:53:04.016275 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-17 06:53:04.016294 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-17 06:53:04.016313 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-17 06:53:04.016330 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.016348 | orchestrator | 2026-02-17 06:53:04.016367 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:53:04.016386 | orchestrator | Tuesday 17 February 2026 06:52:38 +0000 (0:00:01.439) 1:05:53.907 ****** 2026-02-17 06:53:04.016434 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:53:04.016451 | orchestrator | 2026-02-17 06:53:04.016462 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:53:04.016473 | orchestrator | Tuesday 17 February 2026 06:52:39 +0000 
(0:00:01.169) 1:05:55.076 ****** 2026-02-17 06:53:04.016484 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-17 06:53:04.016495 | orchestrator | 2026-02-17 06:53:04.016506 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 06:53:04.016518 | orchestrator | Tuesday 17 February 2026 06:52:41 +0000 (0:00:01.431) 1:05:56.508 ****** 2026-02-17 06:53:04.016549 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:53:04.016561 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:53:04.016572 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:53:04.016583 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 06:53:04.016594 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-17 06:53:04.016605 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 06:53:04.016616 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:53:04.016627 | orchestrator | 2026-02-17 06:53:04.016638 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 06:53:04.016649 | orchestrator | Tuesday 17 February 2026 06:52:43 +0000 (0:00:02.259) 1:05:58.767 ****** 2026-02-17 06:53:04.016666 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:53:04.016685 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:53:04.016703 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:53:04.016722 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-17 06:53:04.016740 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-17 06:53:04.016760 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-17 06:53:04.016774 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:53:04.016785 | orchestrator | 2026-02-17 06:53:04.016809 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-17 06:53:04.016820 | orchestrator | Tuesday 17 February 2026 06:52:45 +0000 (0:00:02.351) 1:06:01.119 ****** 2026-02-17 06:53:04.016831 | orchestrator | changed: [testbed-node-4] 2026-02-17 06:53:04.016842 | orchestrator | 2026-02-17 06:53:04.016853 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-17 06:53:04.016864 | orchestrator | Tuesday 17 February 2026 06:52:47 +0000 (0:00:01.946) 1:06:03.066 ****** 2026-02-17 06:53:04.016875 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 06:53:04.016886 | orchestrator | 2026-02-17 06:53:04.016897 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-17 06:53:04.016908 | orchestrator | Tuesday 17 February 2026 06:52:50 +0000 (0:00:02.458) 1:06:05.524 ****** 2026-02-17 06:53:04.016919 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 06:53:04.016930 | orchestrator | 2026-02-17 06:53:04.016941 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 06:53:04.016952 | orchestrator | Tuesday 17 February 2026 06:52:52 +0000 (0:00:01.930) 1:06:07.455 ****** 2026-02-17 06:53:04.016963 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-17 06:53:04.016974 | orchestrator | 2026-02-17 06:53:04.016985 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 06:53:04.017004 | orchestrator | Tuesday 17 February 2026 06:52:53 +0000 (0:00:01.425) 1:06:08.881 ****** 2026-02-17 06:53:04.017015 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-17 06:53:04.017028 | orchestrator | 2026-02-17 06:53:04.017047 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 06:53:04.017066 | orchestrator | Tuesday 17 February 2026 06:52:54 +0000 (0:00:01.131) 1:06:10.012 ****** 2026-02-17 06:53:04.017084 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.017103 | orchestrator | 2026-02-17 06:53:04.017121 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 06:53:04.017139 | orchestrator | Tuesday 17 February 2026 06:52:55 +0000 (0:00:01.142) 1:06:11.155 ****** 2026-02-17 06:53:04.017159 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:53:04.017176 | orchestrator | 2026-02-17 06:53:04.017195 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-17 06:53:04.017212 | orchestrator | Tuesday 17 February 2026 06:52:57 +0000 (0:00:01.539) 1:06:12.695 ****** 2026-02-17 06:53:04.017228 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:53:04.017246 | orchestrator | 2026-02-17 06:53:04.017263 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 06:53:04.017281 | orchestrator | Tuesday 17 February 2026 06:52:59 +0000 (0:00:01.586) 1:06:14.282 ****** 2026-02-17 06:53:04.017299 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:53:04.017317 | orchestrator | 2026-02-17 06:53:04.017336 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 06:53:04.017354 | orchestrator | Tuesday 17 February 2026 06:53:00 +0000 (0:00:01.547) 1:06:15.829 ****** 2026-02-17 06:53:04.017371 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.017390 | orchestrator | 2026-02-17 06:53:04.017453 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 06:53:04.017474 | orchestrator | Tuesday 17 February 2026 06:53:01 +0000 (0:00:01.191) 1:06:17.021 ****** 2026-02-17 06:53:04.017493 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.017513 | orchestrator | 2026-02-17 06:53:04.017533 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 06:53:04.017552 | orchestrator | Tuesday 17 February 2026 06:53:02 +0000 (0:00:01.126) 1:06:18.147 ****** 2026-02-17 06:53:04.017571 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:53:04.017604 | orchestrator | 2026-02-17 06:53:04.017623 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 06:53:04.017659 | orchestrator | Tuesday 17 February 2026 06:53:03 +0000 (0:00:01.124) 1:06:19.271 ****** 2026-02-17 06:53:44.478327 | 
orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.478510 | orchestrator |
2026-02-17 06:53:44.478536 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 06:53:44.478557 | orchestrator | Tuesday 17 February 2026 06:53:05 +0000 (0:00:01.574) 1:06:20.846 ******
2026-02-17 06:53:44.478576 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.478594 | orchestrator |
2026-02-17 06:53:44.478613 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 06:53:44.478631 | orchestrator | Tuesday 17 February 2026 06:53:07 +0000 (0:00:01.532) 1:06:22.378 ******
2026-02-17 06:53:44.478649 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.478669 | orchestrator |
2026-02-17 06:53:44.478687 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:53:44.478705 | orchestrator | Tuesday 17 February 2026 06:53:07 +0000 (0:00:00.796) 1:06:23.175 ******
2026-02-17 06:53:44.478723 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.478741 | orchestrator |
2026-02-17 06:53:44.478759 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:53:44.478777 | orchestrator | Tuesday 17 February 2026 06:53:08 +0000 (0:00:00.867) 1:06:24.043 ******
2026-02-17 06:53:44.478795 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.478813 | orchestrator |
2026-02-17 06:53:44.478830 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:53:44.478848 | orchestrator | Tuesday 17 February 2026 06:53:09 +0000 (0:00:00.784) 1:06:24.828 ******
2026-02-17 06:53:44.478866 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.478884 | orchestrator |
2026-02-17 06:53:44.478902 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:53:44.478920 | orchestrator | Tuesday 17 February 2026 06:53:10 +0000 (0:00:00.847) 1:06:25.675 ******
2026-02-17 06:53:44.478938 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.478956 | orchestrator |
2026-02-17 06:53:44.478974 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:53:44.478992 | orchestrator | Tuesday 17 February 2026 06:53:11 +0000 (0:00:00.779) 1:06:26.455 ******
2026-02-17 06:53:44.479010 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479028 | orchestrator |
2026-02-17 06:53:44.479046 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:53:44.479064 | orchestrator | Tuesday 17 February 2026 06:53:11 +0000 (0:00:00.802) 1:06:27.257 ******
2026-02-17 06:53:44.479082 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479100 | orchestrator |
2026-02-17 06:53:44.479118 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:53:44.479136 | orchestrator | Tuesday 17 February 2026 06:53:12 +0000 (0:00:00.797) 1:06:28.055 ******
2026-02-17 06:53:44.479154 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479172 | orchestrator |
2026-02-17 06:53:44.479190 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:53:44.479208 | orchestrator | Tuesday 17 February 2026 06:53:13 +0000 (0:00:00.777) 1:06:28.832 ******
2026-02-17 06:53:44.479226 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.479244 | orchestrator |
2026-02-17 06:53:44.479262 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:53:44.479280 | orchestrator | Tuesday 17 February 2026 06:53:14 +0000 (0:00:00.823) 1:06:29.655 ******
2026-02-17 06:53:44.479298 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.479316 | orchestrator |
2026-02-17 06:53:44.479334 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:53:44.479352 | orchestrator | Tuesday 17 February 2026 06:53:15 +0000 (0:00:00.767) 1:06:30.434 ******
2026-02-17 06:53:44.479370 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479415 | orchestrator |
2026-02-17 06:53:44.479483 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:53:44.479503 | orchestrator | Tuesday 17 February 2026 06:53:15 +0000 (0:00:00.767) 1:06:31.201 ******
2026-02-17 06:53:44.479522 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479542 | orchestrator |
2026-02-17 06:53:44.479561 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:53:44.479580 | orchestrator | Tuesday 17 February 2026 06:53:16 +0000 (0:00:00.788) 1:06:31.990 ******
2026-02-17 06:53:44.479600 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479619 | orchestrator |
2026-02-17 06:53:44.479637 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:53:44.479656 | orchestrator | Tuesday 17 February 2026 06:53:17 +0000 (0:00:00.793) 1:06:32.784 ******
2026-02-17 06:53:44.479676 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479695 | orchestrator |
2026-02-17 06:53:44.479713 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:53:44.479732 | orchestrator | Tuesday 17 February 2026 06:53:18 +0000 (0:00:00.884) 1:06:33.668 ******
2026-02-17 06:53:44.479750 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479768 | orchestrator |
2026-02-17 06:53:44.479787 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:53:44.479807 | orchestrator | Tuesday 17 February 2026 06:53:19 +0000 (0:00:00.772) 1:06:34.440 ******
2026-02-17 06:53:44.479825 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479845 | orchestrator |
2026-02-17 06:53:44.479863 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:53:44.479882 | orchestrator | Tuesday 17 February 2026 06:53:19 +0000 (0:00:00.774) 1:06:35.215 ******
2026-02-17 06:53:44.479901 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479920 | orchestrator |
2026-02-17 06:53:44.479939 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:53:44.479960 | orchestrator | Tuesday 17 February 2026 06:53:20 +0000 (0:00:00.817) 1:06:36.033 ******
2026-02-17 06:53:44.479978 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.479997 | orchestrator |
2026-02-17 06:53:44.480016 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:53:44.480035 | orchestrator | Tuesday 17 February 2026 06:53:21 +0000 (0:00:00.807) 1:06:36.840 ******
2026-02-17 06:53:44.480052 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480070 | orchestrator |
2026-02-17 06:53:44.480111 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:53:44.480132 | orchestrator | Tuesday 17 February 2026 06:53:22 +0000 (0:00:00.787) 1:06:37.627 ******
2026-02-17 06:53:44.480150 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480168 | orchestrator |
2026-02-17 06:53:44.480179 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:53:44.480188 | orchestrator | Tuesday 17 February 2026 06:53:23 +0000 (0:00:00.796) 1:06:38.424 ******
2026-02-17 06:53:44.480198 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480208 | orchestrator |
2026-02-17 06:53:44.480217 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:53:44.480227 | orchestrator | Tuesday 17 February 2026 06:53:23 +0000 (0:00:00.775) 1:06:39.199 ******
2026-02-17 06:53:44.480237 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480247 | orchestrator |
2026-02-17 06:53:44.480256 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:53:44.480266 | orchestrator | Tuesday 17 February 2026 06:53:24 +0000 (0:00:00.773) 1:06:39.973 ******
2026-02-17 06:53:44.480276 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.480285 | orchestrator |
2026-02-17 06:53:44.480297 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:53:44.480313 | orchestrator | Tuesday 17 February 2026 06:53:26 +0000 (0:00:01.628) 1:06:41.602 ******
2026-02-17 06:53:44.480329 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.480356 | orchestrator |
2026-02-17 06:53:44.480417 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:53:44.480437 | orchestrator | Tuesday 17 February 2026 06:53:28 +0000 (0:00:01.922) 1:06:43.525 ******
2026-02-17 06:53:44.480453 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-02-17 06:53:44.480464 | orchestrator |
2026-02-17 06:53:44.480474 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:53:44.480484 | orchestrator | Tuesday 17 February 2026 06:53:29 +0000 (0:00:01.333) 1:06:44.858 ******
2026-02-17 06:53:44.480493 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480503 | orchestrator |
2026-02-17 06:53:44.480513 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:53:44.480523 | orchestrator | Tuesday 17 February 2026 06:53:30 +0000 (0:00:01.142) 1:06:46.001 ******
2026-02-17 06:53:44.480532 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480542 | orchestrator |
2026-02-17 06:53:44.480551 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:53:44.480561 | orchestrator | Tuesday 17 February 2026 06:53:31 +0000 (0:00:01.150) 1:06:47.151 ******
2026-02-17 06:53:44.480571 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:53:44.480580 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:53:44.480590 | orchestrator |
2026-02-17 06:53:44.480600 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:53:44.480609 | orchestrator | Tuesday 17 February 2026 06:53:33 +0000 (0:00:01.864) 1:06:49.016 ******
2026-02-17 06:53:44.480619 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.480628 | orchestrator |
2026-02-17 06:53:44.480638 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:53:44.480648 | orchestrator | Tuesday 17 February 2026 06:53:35 +0000 (0:00:01.474) 1:06:50.491 ******
2026-02-17 06:53:44.480658 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480667 | orchestrator |
2026-02-17 06:53:44.480677 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:53:44.480694 | orchestrator | Tuesday 17 February 2026 06:53:36 +0000 (0:00:01.146) 1:06:51.638 ******
2026-02-17 06:53:44.480704 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480714 | orchestrator |
2026-02-17 06:53:44.480723 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:53:44.480733 | orchestrator | Tuesday 17 February 2026 06:53:37 +0000 (0:00:00.810) 1:06:52.449 ******
2026-02-17 06:53:44.480743 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480752 | orchestrator |
2026-02-17 06:53:44.480762 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:53:44.480772 | orchestrator | Tuesday 17 February 2026 06:53:37 +0000 (0:00:00.806) 1:06:53.255 ******
2026-02-17 06:53:44.480781 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-02-17 06:53:44.480790 | orchestrator |
2026-02-17 06:53:44.480800 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 06:53:44.480810 | orchestrator | Tuesday 17 February 2026 06:53:39 +0000 (0:00:01.164) 1:06:54.420 ******
2026-02-17 06:53:44.480820 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:53:44.480829 | orchestrator |
2026-02-17 06:53:44.480839 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 06:53:44.480848 | orchestrator | Tuesday 17 February 2026 06:53:40 +0000 (0:00:01.716) 1:06:56.137 ******
2026-02-17 06:53:44.480858 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:53:44.480868 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:53:44.480877 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:53:44.480887 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480904 | orchestrator |
2026-02-17 06:53:44.480914 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 06:53:44.480924 | orchestrator | Tuesday 17 February 2026 06:53:42 +0000 (0:00:01.191) 1:06:57.328 ******
2026-02-17 06:53:44.480933 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480943 | orchestrator |
2026-02-17 06:53:44.480953 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 06:53:44.480962 | orchestrator | Tuesday 17 February 2026 06:53:43 +0000 (0:00:01.117) 1:06:58.445 ******
2026-02-17 06:53:44.480972 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:53:44.480981 | orchestrator |
2026-02-17 06:53:44.480999 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 06:54:27.897949 | orchestrator | Tuesday 17 February 2026 06:53:44 +0000 (0:00:01.293) 1:06:59.739 ******
2026-02-17 06:54:27.898117 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898135 | orchestrator |
2026-02-17 06:54:27.898149 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 06:54:27.898161 | orchestrator | Tuesday 17 February 2026 06:53:45 +0000 (0:00:01.162) 1:07:00.901 ******
2026-02-17 06:54:27.898172 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898183 | orchestrator |
2026-02-17 06:54:27.898194 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-17 06:54:27.898205 | orchestrator | Tuesday 17 February 2026 06:53:46 +0000 (0:00:01.178) 1:07:02.080 ******
2026-02-17 06:54:27.898216 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898227 | orchestrator |
2026-02-17 06:54:27.898238 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 06:54:27.898249 | orchestrator | Tuesday 17 February 2026 06:53:47 +0000 (0:00:00.855) 1:07:02.936 ******
2026-02-17 06:54:27.898260 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:54:27.898272 | orchestrator |
2026-02-17 06:54:27.898283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 06:54:27.898294 | orchestrator | Tuesday 17 February 2026 06:53:49 +0000 (0:00:02.238) 1:07:05.175 ******
2026-02-17 06:54:27.898305 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:54:27.898316 | orchestrator |
2026-02-17 06:54:27.898327 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 06:54:27.898338 | orchestrator | Tuesday 17 February 2026 06:53:50 +0000 (0:00:00.828) 1:07:06.003 ******
2026-02-17 06:54:27.898403 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-02-17 06:54:27.898416 | orchestrator |
2026-02-17 06:54:27.898427 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 06:54:27.898438 | orchestrator | Tuesday 17 February 2026 06:53:51 +0000 (0:00:01.264) 1:07:07.268 ******
2026-02-17 06:54:27.898449 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898460 | orchestrator |
2026-02-17 06:54:27.898471 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 06:54:27.898483 | orchestrator | Tuesday 17 February 2026 06:53:53 +0000 (0:00:01.189) 1:07:08.457 ******
2026-02-17 06:54:27.898496 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898508 | orchestrator |
2026-02-17 06:54:27.898521 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 06:54:27.898534 | orchestrator | Tuesday 17 February 2026 06:53:54 +0000 (0:00:01.166) 1:07:09.624 ******
2026-02-17 06:54:27.898554 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898572 | orchestrator |
2026-02-17 06:54:27.898592 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 06:54:27.898613 | orchestrator | Tuesday 17 February 2026 06:53:55 +0000 (0:00:01.151) 1:07:10.776 ******
2026-02-17 06:54:27.898633 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898651 | orchestrator |
2026-02-17 06:54:27.898664 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 06:54:27.898676 | orchestrator | Tuesday 17 February 2026 06:53:56 +0000 (0:00:01.188) 1:07:11.965 ******
2026-02-17 06:54:27.898713 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898726 | orchestrator |
2026-02-17 06:54:27.898739 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 06:54:27.898752 | orchestrator | Tuesday 17 February 2026 06:53:57 +0000 (0:00:01.217) 1:07:13.182 ******
2026-02-17 06:54:27.898777 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898790 | orchestrator |
2026-02-17 06:54:27.898802 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 06:54:27.898815 | orchestrator | Tuesday 17 February 2026 06:53:59 +0000 (0:00:01.201) 1:07:14.383 ******
2026-02-17 06:54:27.898827 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898839 | orchestrator |
2026-02-17 06:54:27.898850 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 06:54:27.898861 | orchestrator | Tuesday 17 February 2026 06:54:00 +0000 (0:00:01.313) 1:07:15.697 ******
2026-02-17 06:54:27.898872 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.898883 | orchestrator |
2026-02-17 06:54:27.898894 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 06:54:27.898905 | orchestrator | Tuesday 17 February 2026 06:54:01 +0000 (0:00:01.174) 1:07:16.871 ******
2026-02-17 06:54:27.898916 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:54:27.898927 | orchestrator |
2026-02-17 06:54:27.898938 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 06:54:27.898949 | orchestrator | Tuesday 17 February 2026 06:54:02 +0000 (0:00:00.803) 1:07:17.675 ******
2026-02-17 06:54:27.898960 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-02-17 06:54:27.898972 | orchestrator |
2026-02-17 06:54:27.898983 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 06:54:27.898994 | orchestrator | Tuesday 17 February 2026 06:54:03 +0000 (0:00:01.117) 1:07:18.793 ******
2026-02-17 06:54:27.899005 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-17 06:54:27.899016 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-17 06:54:27.899027 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-17 06:54:27.899037 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-17 06:54:27.899048 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-17 06:54:27.899058 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-17 06:54:27.899069 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-17 06:54:27.899080 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-17 06:54:27.899090 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 06:54:27.899101 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 06:54:27.899112 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 06:54:27.899141 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 06:54:27.899152 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 06:54:27.899163 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 06:54:27.899174 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-17 06:54:27.899185 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-17 06:54:27.899196 | orchestrator |
2026-02-17 06:54:27.899207 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 06:54:27.899218 | orchestrator | Tuesday 17 February 2026 06:54:09 +0000 (0:00:06.356) 1:07:25.150 ******
2026-02-17 06:54:27.899229 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-02-17 06:54:27.899240 | orchestrator |
2026-02-17 06:54:27.899251 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-17 06:54:27.899262 | orchestrator | Tuesday 17 February 2026 06:54:11 +0000 (0:00:01.154) 1:07:26.304 ******
2026-02-17 06:54:27.899281 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 06:54:27.899293 | orchestrator |
2026-02-17 06:54:27.899305 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-17 06:54:27.899316 | orchestrator | Tuesday 17 February 2026 06:54:12 +0000 (0:00:01.548) 1:07:27.852 ******
2026-02-17 06:54:27.899327 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 06:54:27.899338 | orchestrator |
2026-02-17 06:54:27.899349 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 06:54:27.899381 | orchestrator | Tuesday 17 February 2026 06:54:14 +0000 (0:00:01.651) 1:07:29.504 ******
2026-02-17 06:54:27.899392 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899402 | orchestrator |
2026-02-17 06:54:27.899413 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 06:54:27.899424 | orchestrator | Tuesday 17 February 2026 06:54:15 +0000 (0:00:00.799) 1:07:30.303 ******
2026-02-17 06:54:27.899434 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899445 | orchestrator |
2026-02-17 06:54:27.899456 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 06:54:27.899474 | orchestrator | Tuesday 17 February 2026 06:54:15 +0000 (0:00:00.794) 1:07:31.097 ******
2026-02-17 06:54:27.899493 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899511 | orchestrator |
2026-02-17 06:54:27.899528 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 06:54:27.899547 | orchestrator | Tuesday 17 February 2026 06:54:16 +0000 (0:00:00.778) 1:07:31.876 ******
2026-02-17 06:54:27.899567 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899586 | orchestrator |
2026-02-17 06:54:27.899605 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 06:54:27.899626 | orchestrator | Tuesday 17 February 2026 06:54:17 +0000 (0:00:00.784) 1:07:32.660 ******
2026-02-17 06:54:27.899644 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899665 | orchestrator |
2026-02-17 06:54:27.899679 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 06:54:27.899690 | orchestrator | Tuesday 17 February 2026 06:54:18 +0000 (0:00:00.787) 1:07:33.448 ******
2026-02-17 06:54:27.899708 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899719 | orchestrator |
2026-02-17 06:54:27.899730 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 06:54:27.899741 | orchestrator | Tuesday 17 February 2026 06:54:18 +0000 (0:00:00.780) 1:07:34.229 ******
2026-02-17 06:54:27.899751 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899762 | orchestrator |
2026-02-17 06:54:27.899773 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 06:54:27.899784 | orchestrator | Tuesday 17 February 2026 06:54:19 +0000 (0:00:00.796) 1:07:35.025 ******
2026-02-17 06:54:27.899794 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899806 | orchestrator |
2026-02-17 06:54:27.899816 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 06:54:27.899827 | orchestrator | Tuesday 17 February 2026 06:54:20 +0000 (0:00:00.831) 1:07:35.857 ******
2026-02-17 06:54:27.899838 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899849 | orchestrator |
2026-02-17 06:54:27.899859 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 06:54:27.899870 | orchestrator | Tuesday 17 February 2026 06:54:21 +0000 (0:00:00.844) 1:07:36.702 ******
2026-02-17 06:54:27.899881 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899894 | orchestrator |
2026-02-17 06:54:27.899912 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 06:54:27.899931 | orchestrator | Tuesday 17 February 2026 06:54:22 +0000 (0:00:00.792) 1:07:37.495 ******
2026-02-17 06:54:27.899959 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:54:27.899975 | orchestrator |
2026-02-17 06:54:27.899987 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 06:54:27.899998 | orchestrator | Tuesday 17 February 2026 06:54:23 +0000 (0:00:00.825) 1:07:38.320 ******
2026-02-17 06:54:27.900008 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-02-17 06:54:27.900019 | orchestrator |
2026-02-17 06:54:27.900030 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 06:54:27.900069 | orchestrator | Tuesday 17 February 2026 06:54:27 +0000 (0:00:04.004) 1:07:42.325 ******
2026-02-17 06:54:27.900088 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 06:54:27.900108 | orchestrator |
2026-02-17 06:54:27.900140 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 06:55:09.195060 | orchestrator | Tuesday 17 February 2026 06:54:27 +0000 (0:00:00.831) 1:07:43.157 ******
2026-02-17 06:55:09.195205 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-17 06:55:09.195237 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-17 06:55:09.195259 | orchestrator |
2026-02-17 06:55:09.195278 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 06:55:09.195296 | orchestrator | Tuesday 17 February 2026 06:54:32 +0000 (0:00:04.531) 1:07:47.688 ******
2026-02-17 06:55:09.195314 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.195405 | orchestrator |
2026-02-17 06:55:09.195428 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 06:55:09.195448 | orchestrator | Tuesday 17 February 2026 06:54:33 +0000 (0:00:00.788) 1:07:48.477 ******
2026-02-17 06:55:09.195460 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.195471 | orchestrator |
2026-02-17 06:55:09.195483 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:55:09.195496 | orchestrator | Tuesday 17 February 2026 06:54:34 +0000 (0:00:00.850) 1:07:49.328 ******
2026-02-17 06:55:09.195507 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.195518 | orchestrator |
2026-02-17 06:55:09.195529 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:55:09.195540 | orchestrator | Tuesday 17 February 2026 06:54:34 +0000 (0:00:00.788) 1:07:50.116 ******
2026-02-17 06:55:09.195551 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.195562 | orchestrator |
2026-02-17 06:55:09.195575 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 06:55:09.195589 | orchestrator | Tuesday 17 February 2026 06:54:35 +0000 (0:00:00.802) 1:07:50.919 ******
2026-02-17 06:55:09.195601 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.195614 | orchestrator |
2026-02-17 06:55:09.195628 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 06:55:09.195641 | orchestrator | Tuesday 17 February 2026 06:54:36 +0000 (0:00:00.872) 1:07:51.791 ******
2026-02-17 06:55:09.195654 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:55:09.195667 | orchestrator |
2026-02-17 06:55:09.195680 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 06:55:09.195693 | orchestrator | Tuesday 17 February 2026 06:54:37 +0000 (0:00:00.906) 1:07:52.698 ******
2026-02-17 06:55:09.195706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-17 06:55:09.195744 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-17 06:55:09.195773 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-17 06:55:09.195786 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.195799 | orchestrator |
2026-02-17 06:55:09.195812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 06:55:09.195825 | orchestrator | Tuesday 17 February 2026 06:54:38 +0000 (0:00:01.098) 1:07:53.796 ******
2026-02-17 06:55:09.195837 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-17 06:55:09.195849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-17 06:55:09.195862 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-17 06:55:09.195874 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.195887 | orchestrator |
2026-02-17 06:55:09.195899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 06:55:09.195912 | orchestrator | Tuesday 17 February 2026 06:54:39 +0000 (0:00:01.128) 1:07:54.925 ******
2026-02-17 06:55:09.195925 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-17 06:55:09.195937 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-17 06:55:09.195951 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-17 06:55:09.195963 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.195975 | orchestrator |
2026-02-17 06:55:09.195986 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 06:55:09.195997 | orchestrator | Tuesday 17 February 2026 06:54:40 +0000 (0:00:01.087) 1:07:56.013 ******
2026-02-17 06:55:09.196008 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:55:09.196019 | orchestrator |
2026-02-17 06:55:09.196030 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 06:55:09.196041 | orchestrator | Tuesday 17 February 2026 06:54:41 +0000 (0:00:00.859) 1:07:56.873 ******
2026-02-17 06:55:09.196052 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-17 06:55:09.196063 | orchestrator |
2026-02-17 06:55:09.196074 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 06:55:09.196085 | orchestrator | Tuesday 17 February 2026 06:54:42 +0000 (0:00:01.031) 1:07:57.905 ******
2026-02-17 06:55:09.196096 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:55:09.196107 | orchestrator |
2026-02-17 06:55:09.196118 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-17 06:55:09.196129 | orchestrator | Tuesday 17 February 2026 06:54:44 +0000 (0:00:01.441) 1:07:59.346 ******
2026-02-17 06:55:09.196140 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4
2026-02-17 06:55:09.196151 | orchestrator |
2026-02-17 06:55:09.196182 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-17 06:55:09.196194 | orchestrator | Tuesday 17 February 2026 06:54:45 +0000 (0:00:01.306) 1:08:00.652 ******
2026-02-17 06:55:09.196204 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-17 06:55:09.196215 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-17 06:55:09.196226 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-17 06:55:09.196238 | orchestrator |
2026-02-17 06:55:09.196248 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-17 06:55:09.196259 | orchestrator | Tuesday 17 February 2026 06:54:48 +0000 (0:00:03.163) 1:08:03.816 ******
2026-02-17 06:55:09.196270 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-17 06:55:09.196281 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-17 06:55:09.196292 | orchestrator | ok: [testbed-node-4]
2026-02-17 06:55:09.196304 | orchestrator |
2026-02-17 06:55:09.196315 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-17 06:55:09.196326 | orchestrator | Tuesday 17 February 2026 06:54:50 +0000 (0:00:01.976) 1:08:05.793 ******
2026-02-17 06:55:09.196358 | orchestrator | skipping: [testbed-node-4]
2026-02-17 06:55:09.196378 | orchestrator |
2026-02-17 06:55:09.196389 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-17 06:55:09.196400 | orchestrator | Tuesday 17 February 2026 06:54:51 +0000 (0:00:00.762) 1:08:06.556 ******
2026-02-17 06:55:09.196411 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4
2026-02-17 06:55:09.196422 | orchestrator |
2026-02-17 06:55:09.196433 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-17 06:55:09.196444 | orchestrator | Tuesday 17 February 2026 06:54:52 +0000 (0:00:01.169) 1:08:07.725 ******
2026-02-17 06:55:09.196455 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-17 06:55:09.196467 | orchestrator |
2026-02-17 06:55:09.196478 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-17 06:55:09.196488 | orchestrator | Tuesday 17 February 2026 06:54:54 +0000 (0:00:01.717) 1:08:09.443 ******
2026-02-17 06:55:09.196499 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-17 06:55:09.196509 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-17 06:55:09.196520 | orchestrator |
2026-02-17 06:55:09.196531 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-17 06:55:09.196542 | orchestrator | Tuesday 17 February 2026 06:54:59 +0000 (0:00:05.113) 1:08:14.556 ******
2026-02-17 06:55:09.196553 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 06:55:09.196563 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 06:55:09.196574 | orchestrator | 2026-02-17 06:55:09.196585 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-17 06:55:09.196596 | orchestrator | Tuesday 17 February 2026 06:55:02 +0000 (0:00:03.011) 1:08:17.567 ****** 2026-02-17 06:55:09.196606 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-17 06:55:09.196617 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:55:09.196628 | orchestrator | 2026-02-17 06:55:09.196639 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-17 06:55:09.196650 | orchestrator | Tuesday 17 February 2026 06:55:03 +0000 (0:00:01.649) 1:08:19.217 ****** 2026-02-17 06:55:09.196661 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-02-17 06:55:09.196672 | orchestrator | 2026-02-17 06:55:09.196683 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-17 06:55:09.196694 | orchestrator | Tuesday 17 February 2026 06:55:05 +0000 (0:00:01.131) 1:08:20.349 ****** 2026-02-17 06:55:09.196705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:55:09.196716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:55:09.196727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:55:09.196738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-17 06:55:09.196749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:55:09.196760 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:55:09.196771 | orchestrator | 2026-02-17 06:55:09.196782 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-17 06:55:09.196793 | orchestrator | Tuesday 17 February 2026 06:55:07 +0000 (0:00:01.968) 1:08:22.318 ****** 2026-02-17 06:55:09.196804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:55:09.196821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:55:09.196832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:55:09.196849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:56:17.002664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 06:56:17.002780 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:56:17.002798 | orchestrator | 2026-02-17 06:56:17.002811 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-17 06:56:17.002824 | orchestrator | Tuesday 17 February 2026 06:55:09 +0000 (0:00:02.133) 1:08:24.452 ****** 2026-02-17 06:56:17.002835 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:56:17.002848 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:56:17.002860 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:56:17.002871 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:56:17.002883 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 06:56:17.002894 | orchestrator | 2026-02-17 06:56:17.002905 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-17 06:56:17.002917 | orchestrator | Tuesday 17 February 2026 06:55:40 +0000 (0:00:31.434) 1:08:55.886 ****** 2026-02-17 06:56:17.002928 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:56:17.002940 | orchestrator | 2026-02-17 06:56:17.002950 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-17 06:56:17.002961 | orchestrator | Tuesday 17 February 2026 06:55:41 +0000 (0:00:00.800) 1:08:56.687 ****** 2026-02-17 06:56:17.002972 | orchestrator | skipping: [testbed-node-4] 2026-02-17 06:56:17.002983 | orchestrator | 2026-02-17 06:56:17.002995 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-17 06:56:17.003006 | orchestrator | Tuesday 17 February 2026 06:55:42 +0000 (0:00:00.787) 1:08:57.474 ****** 2026-02-17 06:56:17.003017 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-02-17 06:56:17.003029 | orchestrator | 2026-02-17 06:56:17.003040 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-17 06:56:17.003096 | orchestrator | Tuesday 17 February 2026 06:55:43 +0000 (0:00:01.169) 1:08:58.644 ****** 2026-02-17 06:56:17.003109 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-02-17 06:56:17.003120 | orchestrator | 2026-02-17 06:56:17.003132 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-17 06:56:17.003143 | orchestrator | Tuesday 17 February 2026 06:55:44 +0000 (0:00:01.127) 1:08:59.771 ****** 2026-02-17 06:56:17.003154 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:56:17.003166 | orchestrator | 2026-02-17 06:56:17.003182 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-17 06:56:17.003194 | orchestrator | Tuesday 17 February 2026 06:55:46 +0000 (0:00:02.013) 1:09:01.785 ****** 2026-02-17 06:56:17.003207 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:56:17.003220 | orchestrator | 2026-02-17 06:56:17.003233 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-17 06:56:17.003268 | orchestrator | Tuesday 17 February 2026 06:55:48 +0000 (0:00:02.307) 1:09:04.093 ****** 2026-02-17 06:56:17.003283 | orchestrator | ok: [testbed-node-4] 2026-02-17 06:56:17.003296 | orchestrator | 2026-02-17 06:56:17.003361 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-17 06:56:17.003379 | orchestrator | Tuesday 17 February 2026 06:55:51 +0000 (0:00:02.234) 1:09:06.327 ****** 2026-02-17 06:56:17.003398 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-17 06:56:17.003416 | orchestrator | 2026-02-17 06:56:17.003436 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-17 06:56:17.003457 | 
orchestrator | 2026-02-17 06:56:17.003474 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 06:56:17.003486 | orchestrator | Tuesday 17 February 2026 06:55:54 +0000 (0:00:03.241) 1:09:09.569 ****** 2026-02-17 06:56:17.003499 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-17 06:56:17.003511 | orchestrator | 2026-02-17 06:56:17.003524 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-17 06:56:17.003536 | orchestrator | Tuesday 17 February 2026 06:55:55 +0000 (0:00:01.135) 1:09:10.704 ****** 2026-02-17 06:56:17.003548 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:17.003560 | orchestrator | 2026-02-17 06:56:17.003571 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-17 06:56:17.003582 | orchestrator | Tuesday 17 February 2026 06:55:56 +0000 (0:00:01.458) 1:09:12.162 ****** 2026-02-17 06:56:17.003593 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:17.003603 | orchestrator | 2026-02-17 06:56:17.003614 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 06:56:17.003625 | orchestrator | Tuesday 17 February 2026 06:55:58 +0000 (0:00:01.162) 1:09:13.325 ****** 2026-02-17 06:56:17.003636 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:17.003647 | orchestrator | 2026-02-17 06:56:17.003658 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 06:56:17.003669 | orchestrator | Tuesday 17 February 2026 06:55:59 +0000 (0:00:01.441) 1:09:14.767 ****** 2026-02-17 06:56:17.003679 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:17.003691 | orchestrator | 2026-02-17 06:56:17.003719 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-17 06:56:17.003731 | orchestrator | Tuesday 17 
February 2026 06:56:00 +0000 (0:00:01.186) 1:09:15.953 ****** 2026-02-17 06:56:17.003742 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:17.003753 | orchestrator | 2026-02-17 06:56:17.003764 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-17 06:56:17.003775 | orchestrator | Tuesday 17 February 2026 06:56:01 +0000 (0:00:01.125) 1:09:17.079 ****** 2026-02-17 06:56:17.003786 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:17.003797 | orchestrator | 2026-02-17 06:56:17.003808 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-17 06:56:17.003819 | orchestrator | Tuesday 17 February 2026 06:56:02 +0000 (0:00:01.186) 1:09:18.266 ****** 2026-02-17 06:56:17.003830 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:17.003841 | orchestrator | 2026-02-17 06:56:17.003852 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-17 06:56:17.003863 | orchestrator | Tuesday 17 February 2026 06:56:04 +0000 (0:00:01.171) 1:09:19.438 ****** 2026-02-17 06:56:17.003873 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:17.003884 | orchestrator | 2026-02-17 06:56:17.003895 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-17 06:56:17.003906 | orchestrator | Tuesday 17 February 2026 06:56:05 +0000 (0:00:01.179) 1:09:20.617 ****** 2026-02-17 06:56:17.003917 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:56:17.003928 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:56:17.003948 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:56:17.003959 | orchestrator | 2026-02-17 06:56:17.003970 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-17 06:56:17.003981 | orchestrator | Tuesday 17 February 2026 06:56:07 +0000 (0:00:02.040) 1:09:22.657 ****** 2026-02-17 06:56:17.003992 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:17.004003 | orchestrator | 2026-02-17 06:56:17.004014 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-17 06:56:17.004025 | orchestrator | Tuesday 17 February 2026 06:56:09 +0000 (0:00:01.703) 1:09:24.361 ****** 2026-02-17 06:56:17.004036 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:56:17.004047 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:56:17.004058 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:56:17.004068 | orchestrator | 2026-02-17 06:56:17.004079 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-17 06:56:17.004090 | orchestrator | Tuesday 17 February 2026 06:56:12 +0000 (0:00:03.436) 1:09:27.797 ****** 2026-02-17 06:56:17.004101 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-17 06:56:17.004112 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-17 06:56:17.004123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-17 06:56:17.004134 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:17.004145 | orchestrator | 2026-02-17 06:56:17.004163 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-17 06:56:17.004174 | orchestrator | Tuesday 17 February 2026 06:56:14 +0000 (0:00:01.479) 1:09:29.277 ****** 2026-02-17 06:56:17.004187 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-17 06:56:17.004201 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-17 06:56:17.004212 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-17 06:56:17.004224 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:17.004235 | orchestrator | 2026-02-17 06:56:17.004246 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-17 06:56:17.004256 | orchestrator | Tuesday 17 February 2026 06:56:15 +0000 (0:00:01.733) 1:09:31.010 ****** 2026-02-17 06:56:17.004269 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:17.004290 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:36.247330 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:36.247458 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:36.247474 | orchestrator | 2026-02-17 06:56:36.247486 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-17 06:56:36.247497 | orchestrator | Tuesday 17 February 2026 06:56:16 +0000 (0:00:01.251) 1:09:32.262 ****** 2026-02-17 06:56:36.247509 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '1568ba736cf3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-17 06:56:09.978035', 'end': '2026-02-17 06:56:10.033696', 'delta': '0:00:00.055661', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1568ba736cf3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-17 06:56:36.247523 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'cbad5dbfc2c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-17 06:56:10.542827', 'end': '2026-02-17 06:56:10.598943', 'delta': '0:00:00.056116', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cbad5dbfc2c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-17 06:56:36.247549 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2ed4f07416bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-17 06:56:11.092799', 'end': '2026-02-17 06:56:11.140104', 'delta': '0:00:00.047305', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ed4f07416bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-17 06:56:36.247560 | orchestrator | 2026-02-17 06:56:36.247570 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-17 06:56:36.247580 | orchestrator | Tuesday 17 February 2026 06:56:18 +0000 (0:00:01.265) 1:09:33.527 ****** 2026-02-17 06:56:36.247590 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:36.247600 | orchestrator | 2026-02-17 06:56:36.247610 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-17 06:56:36.247620 | orchestrator | Tuesday 17 February 2026 06:56:19 +0000 (0:00:01.265) 1:09:34.792 ****** 2026-02-17 06:56:36.247629 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:36.247639 | orchestrator | 2026-02-17 06:56:36.247649 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-17 06:56:36.247659 | orchestrator | Tuesday 17 February 2026 06:56:20 +0000 (0:00:01.320) 1:09:36.113 ****** 2026-02-17 06:56:36.247668 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:36.247678 | orchestrator | 2026-02-17 06:56:36.247687 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-17 06:56:36.247697 | orchestrator | Tuesday 17 February 2026 06:56:22 +0000 (0:00:01.180) 1:09:37.294 ****** 2026-02-17 06:56:36.247714 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-17 06:56:36.247724 | orchestrator | 2026-02-17 06:56:36.247734 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:56:36.247744 | orchestrator | Tuesday 17 February 2026 06:56:24 +0000 (0:00:01.982) 1:09:39.277 ****** 2026-02-17 06:56:36.247753 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:36.247763 | orchestrator | 2026-02-17 06:56:36.247773 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-17 06:56:36.247783 | orchestrator | Tuesday 17 February 2026 06:56:25 +0000 (0:00:01.198) 1:09:40.475 ****** 2026-02-17 06:56:36.247807 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:36.247818 | orchestrator | 2026-02-17 06:56:36.247828 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-17 06:56:36.247838 | orchestrator | Tuesday 17 February 2026 06:56:26 +0000 (0:00:01.138) 1:09:41.613 ****** 2026-02-17 06:56:36.247848 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:36.247857 | orchestrator | 2026-02-17 06:56:36.247867 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-17 06:56:36.247877 | orchestrator | Tuesday 17 February 2026 06:56:27 +0000 (0:00:01.291) 1:09:42.905 ****** 2026-02-17 06:56:36.247887 | orchestrator | 
skipping: [testbed-node-5] 2026-02-17 06:56:36.247896 | orchestrator | 2026-02-17 06:56:36.247906 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-17 06:56:36.247916 | orchestrator | Tuesday 17 February 2026 06:56:28 +0000 (0:00:01.143) 1:09:44.049 ****** 2026-02-17 06:56:36.247925 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:36.247935 | orchestrator | 2026-02-17 06:56:36.247945 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-17 06:56:36.247954 | orchestrator | Tuesday 17 February 2026 06:56:30 +0000 (0:00:01.305) 1:09:45.355 ****** 2026-02-17 06:56:36.247964 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:36.247973 | orchestrator | 2026-02-17 06:56:36.247983 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-17 06:56:36.247993 | orchestrator | Tuesday 17 February 2026 06:56:31 +0000 (0:00:01.208) 1:09:46.563 ****** 2026-02-17 06:56:36.248002 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:36.248012 | orchestrator | 2026-02-17 06:56:36.248022 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-17 06:56:36.248031 | orchestrator | Tuesday 17 February 2026 06:56:32 +0000 (0:00:01.151) 1:09:47.714 ****** 2026-02-17 06:56:36.248041 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:36.248050 | orchestrator | 2026-02-17 06:56:36.248060 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-17 06:56:36.248070 | orchestrator | Tuesday 17 February 2026 06:56:33 +0000 (0:00:01.204) 1:09:48.919 ****** 2026-02-17 06:56:36.248079 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:36.248089 | orchestrator | 2026-02-17 06:56:36.248098 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-17 06:56:36.248109 
| orchestrator | Tuesday 17 February 2026 06:56:34 +0000 (0:00:01.150) 1:09:50.069 ****** 2026-02-17 06:56:36.248118 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:36.248128 | orchestrator | 2026-02-17 06:56:36.248138 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-17 06:56:36.248148 | orchestrator | Tuesday 17 February 2026 06:56:35 +0000 (0:00:01.190) 1:09:51.260 ****** 2026-02-17 06:56:36.248158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:56:36.248174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'uuids': ['4833064e-8ca1-479d-a0c0-581ea0d1065c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7']}})  2026-02-17 06:56:36.248192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b093f3ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-17 06:56:36.248213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee']}})  2026-02-17 06:56:37.405051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:56:37.405156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:56:37.405174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-17 06:56:37.405188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:56:37.405267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV', 'dm-uuid-CRYPT-LUKS2-f004f31e7c734e098d3470dc55158438-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:56:37.405281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:56:37.405341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'uuids': ['f004f31e-7c73-4e09-8d34-70dc55158438'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV']}})  2026-02-17 06:56:37.405375 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841']}})  2026-02-17 06:56:37.405388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:56:37.405410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37d8f58a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-17 06:56:37.405433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:56:37.405445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-17 06:56:37.405463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7', 'dm-uuid-CRYPT-LUKS2-4833064e8ca1479da0c0581ea0d1065c-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-17 06:56:37.624986 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:37.625084 | orchestrator | 2026-02-17 06:56:37.625101 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-17 06:56:37.625132 | orchestrator | Tuesday 17 February 2026 06:56:37 +0000 (0:00:01.407) 1:09:52.668 ****** 2026-02-17 06:56:37.625157 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841', 'dm-uuid-LVM-pxaIgRveZAxvMeEpaoAXfzq9sKFKwy1sGbFZPznEkgYiA31hsP4O6bNVA03NehL7'], 'uuids': ['4833064e-8ca1-479d-a0c0-581ea0d1065c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625224 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc', 'scsi-SQEMU_QEMU_HARDDISK_b093f3ae-168d-469e-aca7-9106842051bc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b093f3ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625239 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fJeyDw-CEDS-osKx-iZ31-wssk-ycBs-NEGp2B', 'scsi-0QEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86', 'scsi-SQEMU_QEMU_HARDDISK_d011ea34-b61d-4f0b-ab11-4490cc68cf86'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625286 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-17-02-26-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625368 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV', 'dm-uuid-CRYPT-LUKS2-f004f31e7c734e098d3470dc55158438-VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:37.625411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--415e7a1a--a305--5338--824f--e9750ca5ebee-osd--block--415e7a1a--a305--5338--824f--e9750ca5ebee', 'dm-uuid-LVM-ZSgCV7oez6C3QpYToO5Y42TZtFJK40a3VBvha5bePNh4hReIHRwnT0nHx23eA6dV'], 'uuids': ['f004f31e-7c73-4e09-8d34-70dc55158438'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd011ea34', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VBvha5-bePN-h4hR-eIHR-wnT0-nHx2-3eA6dV']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:50.688418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2CzY8R-gn2i-0I7q-T8UF-tmc1-YTc8-rZGBHn', 'scsi-0QEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d', 'scsi-SQEMU_QEMU_HARDDISK_18a6fd36-4eb2-4c52-9e33-394f78b6cc4d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '18a6fd36', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67fd3cab--24d5--5329--b459--0f3a5a04c841-osd--block--67fd3cab--24d5--5329--b459--0f3a5a04c841']}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:50.688536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:50.688549 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37d8f58a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1', 'scsi-SQEMU_QEMU_HARDDISK_37d8f58a-c342-42fe-9565-ad857c4ec944-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:50.688571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:50.688580 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:50.688598 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7', 'dm-uuid-CRYPT-LUKS2-4833064e8ca1479da0c0581ea0d1065c-GbFZPz-nEkg-YiA3-1hsP-4O6b-NVA0-3NehL7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-17 06:56:50.688607 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:50.688617 | orchestrator | 2026-02-17 06:56:50.688625 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-17 06:56:50.688633 | orchestrator | Tuesday 17 February 2026 06:56:38 +0000 (0:00:01.395) 1:09:54.063 ****** 2026-02-17 06:56:50.688641 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:50.688648 | orchestrator | 2026-02-17 06:56:50.688656 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-17 06:56:50.688663 | orchestrator | Tuesday 17 February 2026 06:56:40 +0000 (0:00:01.509) 1:09:55.573 ****** 2026-02-17 06:56:50.688671 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:50.688678 | orchestrator | 2026-02-17 06:56:50.688685 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:56:50.688692 | orchestrator | Tuesday 17 February 2026 06:56:41 +0000 (0:00:01.129) 1:09:56.702 ****** 2026-02-17 06:56:50.688699 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:56:50.688706 | orchestrator | 2026-02-17 06:56:50.688714 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:56:50.688721 | orchestrator | Tuesday 17 February 2026 06:56:42 +0000 (0:00:01.453) 1:09:58.156 ****** 2026-02-17 06:56:50.688728 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:50.688736 | orchestrator | 2026-02-17 06:56:50.688744 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-17 06:56:50.688751 | orchestrator | Tuesday 17 February 2026 06:56:44 +0000 (0:00:01.137) 1:09:59.293 ****** 2026-02-17 06:56:50.688758 | orchestrator | skipping: [testbed-node-5] 2026-02-17 
06:56:50.688766 | orchestrator | 2026-02-17 06:56:50.688773 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-17 06:56:50.688780 | orchestrator | Tuesday 17 February 2026 06:56:45 +0000 (0:00:01.274) 1:10:00.567 ****** 2026-02-17 06:56:50.688787 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:50.688795 | orchestrator | 2026-02-17 06:56:50.688802 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-17 06:56:50.688809 | orchestrator | Tuesday 17 February 2026 06:56:46 +0000 (0:00:01.204) 1:10:01.772 ****** 2026-02-17 06:56:50.688817 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-17 06:56:50.688824 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-17 06:56:50.688831 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-17 06:56:50.688839 | orchestrator | 2026-02-17 06:56:50.688846 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-17 06:56:50.688853 | orchestrator | Tuesday 17 February 2026 06:56:48 +0000 (0:00:01.817) 1:10:03.590 ****** 2026-02-17 06:56:50.688860 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-17 06:56:50.688867 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-17 06:56:50.688880 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-17 06:56:50.688887 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:56:50.688894 | orchestrator | 2026-02-17 06:56:50.688901 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-17 06:56:50.688908 | orchestrator | Tuesday 17 February 2026 06:56:49 +0000 (0:00:01.209) 1:10:04.799 ****** 2026-02-17 06:56:50.688916 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-17 06:56:50.688923 | 
orchestrator | 2026-02-17 06:56:50.688935 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-17 06:57:33.935346 | orchestrator | Tuesday 17 February 2026 06:56:50 +0000 (0:00:01.143) 1:10:05.942 ****** 2026-02-17 06:57:33.935465 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:57:33.935482 | orchestrator | 2026-02-17 06:57:33.935496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-17 06:57:33.935507 | orchestrator | Tuesday 17 February 2026 06:56:51 +0000 (0:00:01.160) 1:10:07.103 ****** 2026-02-17 06:57:33.935519 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:57:33.935530 | orchestrator | 2026-02-17 06:57:33.935541 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-17 06:57:33.935552 | orchestrator | Tuesday 17 February 2026 06:56:52 +0000 (0:00:01.157) 1:10:08.261 ****** 2026-02-17 06:57:33.935563 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:57:33.935574 | orchestrator | 2026-02-17 06:57:33.935585 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-17 06:57:33.935597 | orchestrator | Tuesday 17 February 2026 06:56:54 +0000 (0:00:01.148) 1:10:09.410 ****** 2026-02-17 06:57:33.935608 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:57:33.935619 | orchestrator | 2026-02-17 06:57:33.935630 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-17 06:57:33.935641 | orchestrator | Tuesday 17 February 2026 06:56:55 +0000 (0:00:01.265) 1:10:10.676 ****** 2026-02-17 06:57:33.935652 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-17 06:57:33.935664 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-17 06:57:33.935675 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-17 06:57:33.935685 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:57:33.935696 | orchestrator | 2026-02-17 06:57:33.935708 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-17 06:57:33.935719 | orchestrator | Tuesday 17 February 2026 06:56:56 +0000 (0:00:01.404) 1:10:12.081 ****** 2026-02-17 06:57:33.935730 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-17 06:57:33.935741 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-17 06:57:33.935752 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-17 06:57:33.935763 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:57:33.935774 | orchestrator | 2026-02-17 06:57:33.935801 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-17 06:57:33.935813 | orchestrator | Tuesday 17 February 2026 06:56:58 +0000 (0:00:01.454) 1:10:13.535 ****** 2026-02-17 06:57:33.935851 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-17 06:57:33.935878 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-17 06:57:33.935891 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-17 06:57:33.935903 | orchestrator | skipping: [testbed-node-5] 2026-02-17 06:57:33.935916 | orchestrator | 2026-02-17 06:57:33.935929 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-17 06:57:33.935942 | orchestrator | Tuesday 17 February 2026 06:57:00 +0000 (0:00:01.876) 1:10:15.412 ****** 2026-02-17 06:57:33.935954 | orchestrator | ok: [testbed-node-5] 2026-02-17 06:57:33.935966 | orchestrator | 2026-02-17 06:57:33.935979 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-17 06:57:33.936016 | orchestrator | Tuesday 17 February 2026 06:57:01 +0000 
(0:00:01.146) 1:10:16.559 ****** 2026-02-17 06:57:33.936029 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-17 06:57:33.936041 | orchestrator | 2026-02-17 06:57:33.936054 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-17 06:57:33.936066 | orchestrator | Tuesday 17 February 2026 06:57:03 +0000 (0:00:01.818) 1:10:18.378 ****** 2026-02-17 06:57:33.936079 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:57:33.936092 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:57:33.936104 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:57:33.936114 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-17 06:57:33.936125 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-17 06:57:33.936136 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-17 06:57:33.936147 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-17 06:57:33.936158 | orchestrator | 2026-02-17 06:57:33.936169 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-17 06:57:33.936180 | orchestrator | Tuesday 17 February 2026 06:57:05 +0000 (0:00:01.894) 1:10:20.273 ****** 2026-02-17 06:57:33.936190 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-17 06:57:33.936202 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-17 06:57:33.936213 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-17 06:57:33.936224 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3)
2026-02-17 06:57:33.936235 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-17 06:57:33.936246 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 06:57:33.936256 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-17 06:57:33.936288 | orchestrator |
2026-02-17 06:57:33.936299 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-17 06:57:33.936310 | orchestrator | Tuesday 17 February 2026 06:57:07 +0000 (0:00:02.294) 1:10:22.567 ******
2026-02-17 06:57:33.936321 | orchestrator | changed: [testbed-node-5]
2026-02-17 06:57:33.936332 | orchestrator |
2026-02-17 06:57:33.936362 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-17 06:57:33.936373 | orchestrator | Tuesday 17 February 2026 06:57:09 +0000 (0:00:01.933) 1:10:24.501 ******
2026-02-17 06:57:33.936385 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:57:33.936397 | orchestrator |
2026-02-17 06:57:33.936408 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-17 06:57:33.936419 | orchestrator | Tuesday 17 February 2026 06:57:11 +0000 (0:00:02.491) 1:10:26.993 ******
2026-02-17 06:57:33.936430 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:57:33.936441 | orchestrator |
2026-02-17 06:57:33.936452 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-17 06:57:33.936463 | orchestrator | Tuesday 17 February 2026 06:57:13 +0000 (0:00:01.269) 1:10:29.011 ******
2026-02-17 06:57:33.936474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-17 06:57:33.936486 | orchestrator |
2026-02-17 06:57:33.936497 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-17 06:57:33.936508 | orchestrator | Tuesday 17 February 2026 06:57:15 +0000 (0:00:01.132) 1:10:30.280 ******
2026-02-17 06:57:33.936519 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-17 06:57:33.936538 | orchestrator |
2026-02-17 06:57:33.936549 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-17 06:57:33.936560 | orchestrator | Tuesday 17 February 2026 06:57:16 +0000 (0:00:01.165) 1:10:31.413 ******
2026-02-17 06:57:33.936570 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:57:33.936581 | orchestrator |
2026-02-17 06:57:33.936592 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-17 06:57:33.936603 | orchestrator | Tuesday 17 February 2026 06:57:17 +0000 (0:00:01.165) 1:10:32.578 ******
2026-02-17 06:57:33.936614 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:57:33.936625 | orchestrator |
2026-02-17 06:57:33.936643 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-17 06:57:33.936654 | orchestrator | Tuesday 17 February 2026 06:57:18 +0000 (0:00:01.530) 1:10:34.109 ******
2026-02-17 06:57:33.936665 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:57:33.936676 | orchestrator |
2026-02-17 06:57:33.936687 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-17 06:57:33.936698 | orchestrator | Tuesday 17 February 2026 06:57:20 +0000 (0:00:01.999) 1:10:36.109 ******
2026-02-17 06:57:33.936709 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:57:33.936720 | orchestrator |
2026-02-17 06:57:33.936730 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-17 06:57:33.936741 | orchestrator | Tuesday 17 February 2026 06:57:22 +0000 (0:00:01.552) 1:10:37.661 ******
2026-02-17 06:57:33.936752 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:57:33.936763 | orchestrator |
2026-02-17 06:57:33.936774 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-17 06:57:33.936785 | orchestrator | Tuesday 17 February 2026 06:57:23 +0000 (0:00:01.203) 1:10:38.865 ******
2026-02-17 06:57:33.936796 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:57:33.936807 | orchestrator |
2026-02-17 06:57:33.936818 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-17 06:57:33.936829 | orchestrator | Tuesday 17 February 2026 06:57:24 +0000 (0:00:01.243) 1:10:40.108 ******
2026-02-17 06:57:33.936840 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:57:33.936851 | orchestrator |
2026-02-17 06:57:33.936862 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-17 06:57:33.936873 | orchestrator | Tuesday 17 February 2026 06:57:25 +0000 (0:00:01.150) 1:10:41.259 ******
2026-02-17 06:57:33.936884 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:57:33.936895 | orchestrator |
2026-02-17 06:57:33.936905 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-17 06:57:33.936916 | orchestrator | Tuesday 17 February 2026 06:57:27 +0000 (0:00:01.587) 1:10:42.846 ******
2026-02-17 06:57:33.936927 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:57:33.936938 | orchestrator |
2026-02-17 06:57:33.936949 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-17 06:57:33.936960 | orchestrator | Tuesday 17 February 2026 06:57:29 +0000 (0:00:01.545) 1:10:44.392 ******
2026-02-17 06:57:33.936970 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:57:33.936981 | orchestrator |
2026-02-17 06:57:33.936992 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-17 06:57:33.937003 | orchestrator | Tuesday 17 February 2026 06:57:29 +0000 (0:00:00.795) 1:10:45.187 ******
2026-02-17 06:57:33.937014 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:57:33.937025 | orchestrator |
2026-02-17 06:57:33.937036 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-17 06:57:33.937047 | orchestrator | Tuesday 17 February 2026 06:57:30 +0000 (0:00:00.837) 1:10:46.025 ******
2026-02-17 06:57:33.937058 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:57:33.937069 | orchestrator |
2026-02-17 06:57:33.937079 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-17 06:57:33.937090 | orchestrator | Tuesday 17 February 2026 06:57:31 +0000 (0:00:00.769) 1:10:46.795 ******
2026-02-17 06:57:33.937107 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:57:33.937118 | orchestrator |
2026-02-17 06:57:33.937130 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-17 06:57:33.937140 | orchestrator | Tuesday 17 February 2026 06:57:32 +0000 (0:00:00.796) 1:10:47.591 ******
2026-02-17 06:57:33.937151 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:57:33.937162 | orchestrator |
2026-02-17 06:57:33.937173 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-17 06:57:33.937184 | orchestrator | Tuesday 17 February 2026 06:57:33 +0000 (0:00:00.831) 1:10:48.422 ******
2026-02-17 06:57:33.937195 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:57:33.937206 | orchestrator |
2026-02-17 06:57:33.937223 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-17 06:58:15.192346 | orchestrator | Tuesday 17 February 2026 06:57:33 +0000 (0:00:00.771) 1:10:49.193 ******
2026-02-17 06:58:15.192478 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.192497 | orchestrator |
2026-02-17 06:58:15.192510 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-17 06:58:15.192522 | orchestrator | Tuesday 17 February 2026 06:57:34 +0000 (0:00:00.887) 1:10:50.081 ******
2026-02-17 06:58:15.192533 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.192545 | orchestrator |
2026-02-17 06:58:15.192557 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-17 06:58:15.192568 | orchestrator | Tuesday 17 February 2026 06:57:35 +0000 (0:00:00.781) 1:10:50.863 ******
2026-02-17 06:58:15.192579 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:15.192591 | orchestrator |
2026-02-17 06:58:15.192602 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-17 06:58:15.192614 | orchestrator | Tuesday 17 February 2026 06:57:36 +0000 (0:00:00.789) 1:10:51.652 ******
2026-02-17 06:58:15.192625 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:15.192636 | orchestrator |
2026-02-17 06:58:15.192647 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-17 06:58:15.192658 | orchestrator | Tuesday 17 February 2026 06:57:37 +0000 (0:00:00.861) 1:10:52.514 ******
2026-02-17 06:58:15.192669 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.192680 | orchestrator |
2026-02-17 06:58:15.192691 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-17 06:58:15.192703 | orchestrator | Tuesday 17 February 2026 06:57:38 +0000 (0:00:00.793) 1:10:53.308 ******
2026-02-17 06:58:15.192714 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.192725 | orchestrator |
2026-02-17 06:58:15.192736 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-17 06:58:15.192747 | orchestrator | Tuesday 17 February 2026 06:57:38 +0000 (0:00:00.782) 1:10:54.090 ******
2026-02-17 06:58:15.192758 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.192770 | orchestrator |
2026-02-17 06:58:15.192781 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-17 06:58:15.192792 | orchestrator | Tuesday 17 February 2026 06:57:39 +0000 (0:00:00.778) 1:10:54.868 ******
2026-02-17 06:58:15.192803 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.192814 | orchestrator |
2026-02-17 06:58:15.192843 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-17 06:58:15.192857 | orchestrator | Tuesday 17 February 2026 06:57:40 +0000 (0:00:00.808) 1:10:55.677 ******
2026-02-17 06:58:15.192870 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.192882 | orchestrator |
2026-02-17 06:58:15.192895 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-17 06:58:15.192907 | orchestrator | Tuesday 17 February 2026 06:57:41 +0000 (0:00:00.777) 1:10:56.454 ******
2026-02-17 06:58:15.192920 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.192932 | orchestrator |
2026-02-17 06:58:15.192945 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-17 06:58:15.192957 | orchestrator | Tuesday 17 February 2026 06:57:41 +0000 (0:00:00.754) 1:10:57.209 ******
2026-02-17 06:58:15.192989 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193003 | orchestrator |
2026-02-17 06:58:15.193016 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-17 06:58:15.193030 | orchestrator | Tuesday 17 February 2026 06:57:42 +0000 (0:00:00.769) 1:10:57.979 ******
2026-02-17 06:58:15.193042 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193055 | orchestrator |
2026-02-17 06:58:15.193071 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-17 06:58:15.193084 | orchestrator | Tuesday 17 February 2026 06:57:43 +0000 (0:00:00.790) 1:10:58.770 ******
2026-02-17 06:58:15.193096 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193109 | orchestrator |
2026-02-17 06:58:15.193122 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-17 06:58:15.193134 | orchestrator | Tuesday 17 February 2026 06:57:44 +0000 (0:00:00.799) 1:10:59.569 ******
2026-02-17 06:58:15.193146 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193159 | orchestrator |
2026-02-17 06:58:15.193171 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-17 06:58:15.193184 | orchestrator | Tuesday 17 February 2026 06:57:45 +0000 (0:00:00.872) 1:11:00.441 ******
2026-02-17 06:58:15.193196 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193208 | orchestrator |
2026-02-17 06:58:15.193219 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-17 06:58:15.193229 | orchestrator | Tuesday 17 February 2026 06:57:45 +0000 (0:00:00.760) 1:11:01.202 ******
2026-02-17 06:58:15.193241 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193280 | orchestrator |
2026-02-17 06:58:15.193295 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-17 06:58:15.193306 | orchestrator | Tuesday 17 February 2026 06:57:46 +0000 (0:00:00.798) 1:11:02.001 ******
2026-02-17 06:58:15.193317 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:15.193328 | orchestrator |
2026-02-17 06:58:15.193339 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-17 06:58:15.193351 | orchestrator | Tuesday 17 February 2026 06:57:48 +0000 (0:00:01.605) 1:11:03.606 ******
2026-02-17 06:58:15.193362 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:15.193373 | orchestrator |
2026-02-17 06:58:15.193384 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-17 06:58:15.193395 | orchestrator | Tuesday 17 February 2026 06:57:50 +0000 (0:00:01.913) 1:11:05.519 ******
2026-02-17 06:58:15.193406 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-17 06:58:15.193418 | orchestrator |
2026-02-17 06:58:15.193429 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-17 06:58:15.193440 | orchestrator | Tuesday 17 February 2026 06:57:51 +0000 (0:00:01.141) 1:11:06.661 ******
2026-02-17 06:58:15.193451 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193462 | orchestrator |
2026-02-17 06:58:15.193474 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-17 06:58:15.193503 | orchestrator | Tuesday 17 February 2026 06:57:52 +0000 (0:00:01.211) 1:11:07.873 ******
2026-02-17 06:58:15.193516 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193527 | orchestrator |
2026-02-17 06:58:15.193538 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-17 06:58:15.193549 | orchestrator | Tuesday 17 February 2026 06:57:53 +0000 (0:00:01.200) 1:11:09.074 ******
2026-02-17 06:58:15.193560 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-17 06:58:15.193571 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-17 06:58:15.193582 | orchestrator |
2026-02-17 06:58:15.193594 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-17 06:58:15.193605 | orchestrator | Tuesday 17 February 2026 06:57:55 +0000 (0:00:01.842) 1:11:10.916 ******
2026-02-17 06:58:15.193616 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:15.193636 | orchestrator |
2026-02-17 06:58:15.193647 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-17 06:58:15.193658 | orchestrator | Tuesday 17 February 2026 06:57:57 +0000 (0:00:01.500) 1:11:12.416 ******
2026-02-17 06:58:15.193669 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193680 | orchestrator |
2026-02-17 06:58:15.193691 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-17 06:58:15.193703 | orchestrator | Tuesday 17 February 2026 06:57:58 +0000 (0:00:01.154) 1:11:13.570 ******
2026-02-17 06:58:15.193714 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193737 | orchestrator |
2026-02-17 06:58:15.193749 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-17 06:58:15.193760 | orchestrator | Tuesday 17 February 2026 06:57:59 +0000 (0:00:00.911) 1:11:14.482 ******
2026-02-17 06:58:15.193771 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193782 | orchestrator |
2026-02-17 06:58:15.193793 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-17 06:58:15.193804 | orchestrator | Tuesday 17 February 2026 06:58:00 +0000 (0:00:00.797) 1:11:15.280 ******
2026-02-17 06:58:15.193815 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-17 06:58:15.193826 | orchestrator |
2026-02-17 06:58:15.193843 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-17 06:58:15.193855 | orchestrator | Tuesday 17 February 2026 06:58:01 +0000 (0:00:01.154) 1:11:16.434 ******
2026-02-17 06:58:15.193866 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:15.193877 | orchestrator |
2026-02-17 06:58:15.193889 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-17 06:58:15.193900 | orchestrator | Tuesday 17 February 2026 06:58:02 +0000 (0:00:01.780) 1:11:18.215 ******
2026-02-17 06:58:15.193911 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-17 06:58:15.193921 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-17 06:58:15.193932 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-17 06:58:15.193943 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193954 | orchestrator |
2026-02-17 06:58:15.193965 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-17 06:58:15.193976 | orchestrator | Tuesday 17 February 2026 06:58:04 +0000 (0:00:01.226) 1:11:19.442 ******
2026-02-17 06:58:15.193987 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.193998 | orchestrator |
2026-02-17 06:58:15.194010 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-17 06:58:15.194079 | orchestrator | Tuesday 17 February 2026 06:58:05 +0000 (0:00:01.139) 1:11:20.581 ******
2026-02-17 06:58:15.194091 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.194102 | orchestrator |
2026-02-17 06:58:15.194113 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-17 06:58:15.194124 | orchestrator | Tuesday 17 February 2026 06:58:06 +0000 (0:00:01.217) 1:11:21.799 ******
2026-02-17 06:58:15.194135 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.194146 | orchestrator |
2026-02-17 06:58:15.194157 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-17 06:58:15.194168 | orchestrator | Tuesday 17 February 2026 06:58:07 +0000 (0:00:01.131) 1:11:22.930 ******
2026-02-17 06:58:15.194179 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.194191 | orchestrator |
2026-02-17 06:58:15.194201 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-17 06:58:15.194212 | orchestrator | Tuesday 17 February 2026 06:58:08 +0000 (0:00:01.167) 1:11:24.098 ******
2026-02-17 06:58:15.194223 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.194234 | orchestrator |
2026-02-17 06:58:15.194245 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-17 06:58:15.194308 | orchestrator | Tuesday 17 February 2026 06:58:09 +0000 (0:00:00.813) 1:11:24.911 ******
2026-02-17 06:58:15.194328 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:15.194339 | orchestrator |
2026-02-17 06:58:15.194350 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-17 06:58:15.194361 | orchestrator | Tuesday 17 February 2026 06:58:11 +0000 (0:00:02.095) 1:11:27.006 ******
2026-02-17 06:58:15.194372 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:15.194383 | orchestrator |
2026-02-17 06:58:15.194394 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-17 06:58:15.194405 | orchestrator | Tuesday 17 February 2026 06:58:12 +0000 (0:00:00.813) 1:11:27.820 ******
2026-02-17 06:58:15.194416 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-17 06:58:15.194427 | orchestrator |
2026-02-17 06:58:15.194438 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-17 06:58:15.194449 | orchestrator | Tuesday 17 February 2026 06:58:13 +0000 (0:00:01.314) 1:11:29.134 ******
2026-02-17 06:58:15.194460 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:15.194471 | orchestrator |
2026-02-17 06:58:15.194481 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-17 06:58:15.194502 | orchestrator | Tuesday 17 February 2026 06:58:15 +0000 (0:00:01.312) 1:11:30.447 ******
2026-02-17 06:58:56.997988 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.998158 | orchestrator |
2026-02-17 06:58:56.998195 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-17 06:58:56.998219 | orchestrator | Tuesday 17 February 2026 06:58:16 +0000 (0:00:01.239) 1:11:31.686 ******
2026-02-17 06:58:56.998254 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.998266 | orchestrator |
2026-02-17 06:58:56.998278 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-17 06:58:56.998290 | orchestrator | Tuesday 17 February 2026 06:58:17 +0000 (0:00:01.208) 1:11:32.895 ******
2026-02-17 06:58:56.998302 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.998313 | orchestrator |
2026-02-17 06:58:56.998324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-17 06:58:56.998336 | orchestrator | Tuesday 17 February 2026 06:58:18 +0000 (0:00:01.130) 1:11:34.025 ******
2026-02-17 06:58:56.998347 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.998358 | orchestrator |
2026-02-17 06:58:56.998370 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-17 06:58:56.998381 | orchestrator | Tuesday 17 February 2026 06:58:19 +0000 (0:00:01.236) 1:11:35.262 ******
2026-02-17 06:58:56.998392 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.998404 | orchestrator |
2026-02-17 06:58:56.998415 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-17 06:58:56.998426 | orchestrator | Tuesday 17 February 2026 06:58:21 +0000 (0:00:01.204) 1:11:36.467 ******
2026-02-17 06:58:56.998437 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.998448 | orchestrator |
2026-02-17 06:58:56.998459 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-17 06:58:56.998471 | orchestrator | Tuesday 17 February 2026 06:58:22 +0000 (0:00:01.171) 1:11:37.639 ******
2026-02-17 06:58:56.998482 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.998493 | orchestrator |
2026-02-17 06:58:56.998504 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-17 06:58:56.998515 | orchestrator | Tuesday 17 February 2026 06:58:23 +0000 (0:00:01.220) 1:11:38.859 ******
2026-02-17 06:58:56.998528 | orchestrator | ok: [testbed-node-5]
2026-02-17 06:58:56.998542 | orchestrator |
2026-02-17 06:58:56.998570 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-17 06:58:56.998584 | orchestrator | Tuesday 17 February 2026 06:58:24 +0000 (0:00:00.817) 1:11:39.676 ******
2026-02-17 06:58:56.998595 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-17 06:58:56.998608 | orchestrator |
2026-02-17 06:58:56.998641 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-17 06:58:56.998653 | orchestrator | Tuesday 17 February 2026 06:58:25 +0000 (0:00:01.313) 1:11:40.990 ******
2026-02-17 06:58:56.998665 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-17 06:58:56.998676 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-17 06:58:56.998687 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-17 06:58:56.998698 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-17 06:58:56.998709 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-17 06:58:56.998720 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-17 06:58:56.998731 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-17 06:58:56.998741 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-17 06:58:56.998753 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-17 06:58:56.998764 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-17 06:58:56.998775 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-17 06:58:56.998786 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-17 06:58:56.998797 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-17 06:58:56.998808 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-17 06:58:56.998819 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-17 06:58:56.998830 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-17 06:58:56.998841 | orchestrator |
2026-02-17 06:58:56.998852 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-17 06:58:56.998864 | orchestrator | Tuesday 17 February 2026 06:58:31 +0000 (0:00:06.273) 1:11:47.263 ******
2026-02-17 06:58:56.998875 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-17 06:58:56.998886 | orchestrator |
2026-02-17 06:58:56.998897 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-17 06:58:56.998908 | orchestrator | Tuesday 17 February 2026 06:58:33 +0000 (0:00:01.203) 1:11:48.467 ******
2026-02-17 06:58:56.998920 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:58:56.998932 | orchestrator |
2026-02-17 06:58:56.998944 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-17 06:58:56.998955 | orchestrator | Tuesday 17 February 2026 06:58:34 +0000 (0:00:01.507) 1:11:49.974 ******
2026-02-17 06:58:56.998967 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:58:56.998978 | orchestrator |
2026-02-17 06:58:56.998989 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-17 06:58:56.999000 | orchestrator | Tuesday 17 February 2026 06:58:36 +0000 (0:00:01.631) 1:11:51.605 ******
2026-02-17 06:58:56.999011 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999022 | orchestrator |
2026-02-17 06:58:56.999034 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-17 06:58:56.999062 | orchestrator | Tuesday 17 February 2026 06:58:37 +0000 (0:00:00.782) 1:11:52.388 ******
2026-02-17 06:58:56.999073 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999084 | orchestrator |
2026-02-17 06:58:56.999095 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-17 06:58:56.999107 | orchestrator | Tuesday 17 February 2026 06:58:37 +0000 (0:00:00.780) 1:11:53.168 ******
2026-02-17 06:58:56.999117 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999128 | orchestrator |
2026-02-17 06:58:56.999139 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-17 06:58:56.999151 | orchestrator | Tuesday 17 February 2026 06:58:38 +0000 (0:00:00.791) 1:11:53.960 ******
2026-02-17 06:58:56.999170 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999182 | orchestrator |
2026-02-17 06:58:56.999193 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-17 06:58:56.999204 | orchestrator | Tuesday 17 February 2026 06:58:39 +0000 (0:00:00.821) 1:11:54.782 ******
2026-02-17 06:58:56.999215 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999226 | orchestrator |
2026-02-17 06:58:56.999269 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-17 06:58:56.999281 | orchestrator | Tuesday 17 February 2026 06:58:40 +0000 (0:00:00.886) 1:11:55.669 ******
2026-02-17 06:58:56.999292 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999303 | orchestrator |
2026-02-17 06:58:56.999314 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-17 06:58:56.999325 | orchestrator | Tuesday 17 February 2026 06:58:41 +0000 (0:00:00.782) 1:11:56.451 ******
2026-02-17 06:58:56.999336 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999347 | orchestrator |
2026-02-17 06:58:56.999357 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-17 06:58:56.999368 | orchestrator | Tuesday 17 February 2026 06:58:41 +0000 (0:00:00.783) 1:11:57.235 ******
2026-02-17 06:58:56.999379 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999390 | orchestrator |
2026-02-17 06:58:56.999401 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-17 06:58:56.999417 | orchestrator | Tuesday 17 February 2026 06:58:42 +0000 (0:00:00.877) 1:11:58.113 ******
2026-02-17 06:58:56.999429 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999440 | orchestrator |
2026-02-17 06:58:56.999450 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-17 06:58:56.999461 | orchestrator | Tuesday 17 February 2026 06:58:43 +0000 (0:00:00.807) 1:11:58.921 ******
2026-02-17 06:58:56.999472 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999483 | orchestrator |
2026-02-17 06:58:56.999494 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-17 06:58:56.999505 | orchestrator | Tuesday 17 February 2026 06:58:44 +0000 (0:00:00.827) 1:11:59.749 ******
2026-02-17 06:58:56.999516 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999527 | orchestrator |
2026-02-17 06:58:56.999538 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-17 06:58:56.999549 | orchestrator | Tuesday 17 February 2026 06:58:45 +0000 (0:00:00.811) 1:12:00.560 ******
2026-02-17 06:58:56.999561 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-17 06:58:56.999572 | orchestrator |
2026-02-17 06:58:56.999583 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-17 06:58:56.999594 | orchestrator | Tuesday 17 February 2026 06:58:49 +0000 (0:00:04.060) 1:12:04.621 ******
2026-02-17 06:58:56.999605 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-17 06:58:56.999616 | orchestrator |
2026-02-17 06:58:56.999627 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-17 06:58:56.999637 | orchestrator | Tuesday 17 February 2026 06:58:50 +0000 (0:00:00.822) 1:12:05.443 ******
2026-02-17 06:58:56.999651 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-17 06:58:56.999665 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-17 06:58:56.999686 | orchestrator |
2026-02-17 06:58:56.999697 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-17 06:58:56.999708 | orchestrator | Tuesday 17 February 2026 06:58:54 +0000 (0:00:04.441) 1:12:09.885 ******
2026-02-17 06:58:56.999719 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999730 | orchestrator |
2026-02-17 06:58:56.999741 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-17 06:58:56.999752 | orchestrator | Tuesday 17 February 2026 06:58:55 +0000 (0:00:00.785) 1:12:10.671 ******
2026-02-17 06:58:56.999763 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999774 | orchestrator |
2026-02-17 06:58:56.999785 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-17 06:58:56.999796 | orchestrator | Tuesday 17 February 2026 06:58:56 +0000 (0:00:00.780) 1:12:11.451 ******
2026-02-17 06:58:56.999807 | orchestrator | skipping: [testbed-node-5]
2026-02-17 06:58:56.999818 | orchestrator |
2026-02-17 06:58:56.999829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-17 06:58:56.999848 | orchestrator | Tuesday 17 February 2026 06:58:56 +0000 (0:00:00.802) 1:12:12.254 ******
2026-02-17 07:00:02.284467 | orchestrator | skipping: [testbed-node-5]
2026-02-17 07:00:02.284581 | orchestrator |
2026-02-17 07:00:02.284598 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-17 07:00:02.284611 | orchestrator | Tuesday 17 February 2026 06:58:57 +0000 (0:00:00.890) 1:12:13.145 ******
2026-02-17 07:00:02.284621 | orchestrator | skipping: [testbed-node-5]
2026-02-17 07:00:02.284632 | orchestrator |
2026-02-17 07:00:02.284642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-17 07:00:02.284653 | orchestrator | Tuesday 17 February 2026 06:58:58 +0000 (0:00:00.872) 1:12:14.018 ******
2026-02-17 07:00:02.284679 | orchestrator | ok: [testbed-node-5]
2026-02-17 07:00:02.284691 | orchestrator |
2026-02-17 07:00:02.284711 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-17 07:00:02.284723 | orchestrator | Tuesday 17 February 2026 06:58:59 +0000 (0:00:00.883) 1:12:14.902 ******
2026-02-17 07:00:02.284734 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 07:00:02.284746 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 07:00:02.284758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 07:00:02.284770 | orchestrator | skipping: [testbed-node-5]
2026-02-17 07:00:02.284781 | orchestrator |
2026-02-17 07:00:02.284792 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-17 07:00:02.284803 | orchestrator | Tuesday 17 February 2026 06:59:01 +0000 (0:00:01.467) 1:12:16.369 ******
2026-02-17 07:00:02.284814 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 07:00:02.284825 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 07:00:02.284836 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 07:00:02.284848 | orchestrator | skipping: [testbed-node-5]
2026-02-17 07:00:02.284860 | orchestrator |
2026-02-17 07:00:02.284871 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-17 07:00:02.284883 | orchestrator | Tuesday 17 February 2026 06:59:02 +0000 (0:00:01.563) 1:12:17.933 ******
2026-02-17 07:00:02.284913 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-17 07:00:02.284924 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-17 07:00:02.284934 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-17 07:00:02.284945 | orchestrator | skipping: [testbed-node-5]
2026-02-17 07:00:02.284956 | orchestrator |
2026-02-17 07:00:02.284968 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-17 07:00:02.284980 | orchestrator | Tuesday 17 February 2026 06:59:03 +0000 (0:00:01.079) 1:12:19.013 ******
2026-02-17 07:00:02.284993 | orchestrator | ok: [testbed-node-5]
2026-02-17 07:00:02.285027 | orchestrator |
2026-02-17 07:00:02.285038 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-17 07:00:02.285048 | orchestrator | Tuesday 17 February 2026 06:59:04 +0000 (0:00:00.796) 1:12:19.809 ******
2026-02-17 07:00:02.285059 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-17 07:00:02.285070 | orchestrator |
2026-02-17 07:00:02.285082 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-17 07:00:02.285093 | orchestrator | Tuesday 17 February 2026 06:59:05 +0000 (0:00:01.024) 1:12:20.834 ******
2026-02-17 07:00:02.285105 | orchestrator | ok: [testbed-node-5]
2026-02-17 07:00:02.285116 | orchestrator |
2026-02-17 07:00:02.285127 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-17 07:00:02.285139 | orchestrator | Tuesday 17 February 2026 06:59:06 +0000 (0:00:01.381) 1:12:22.215 ******
2026-02-17 07:00:02.285150 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-02-17 07:00:02.285162 | orchestrator |
2026-02-17 07:00:02.285176 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-17 07:00:02.285189 | orchestrator | Tuesday 17 February 2026 06:59:08 +0000 (0:00:01.147) 1:12:23.363 ******
2026-02-17 07:00:02.285202 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-17 07:00:02.285239 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-17 07:00:02.285250 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-17 07:00:02.285262 | orchestrator |
2026-02-17 07:00:02.285273 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-17 07:00:02.285285 | orchestrator | Tuesday 17 February 2026 06:59:11 +0000 (0:00:03.093) 1:12:26.457 ******
2026-02-17 07:00:02.285297 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-17 07:00:02.285307 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-17 07:00:02.285317 | orchestrator | ok: [testbed-node-5]
2026-02-17 07:00:02.285328 | orchestrator |
2026-02-17 07:00:02.285339 | orchestrator | TASK [ceph-rgw : Copy
SSL certificate & key data to certificate path] ********** 2026-02-17 07:00:02.285348 | orchestrator | Tuesday 17 February 2026 06:59:13 +0000 (0:00:02.011) 1:12:28.468 ****** 2026-02-17 07:00:02.285357 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:02.285366 | orchestrator | 2026-02-17 07:00:02.285412 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-17 07:00:02.285437 | orchestrator | Tuesday 17 February 2026 06:59:13 +0000 (0:00:00.774) 1:12:29.243 ****** 2026-02-17 07:00:02.285447 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-02-17 07:00:02.285458 | orchestrator | 2026-02-17 07:00:02.285468 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-17 07:00:02.285478 | orchestrator | Tuesday 17 February 2026 06:59:15 +0000 (0:00:01.343) 1:12:30.587 ****** 2026-02-17 07:00:02.285490 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-17 07:00:02.285502 | orchestrator | 2026-02-17 07:00:02.285513 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-17 07:00:02.285524 | orchestrator | Tuesday 17 February 2026 06:59:16 +0000 (0:00:01.649) 1:12:32.236 ****** 2026-02-17 07:00:02.285556 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 07:00:02.285567 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-17 07:00:02.285578 | orchestrator | 2026-02-17 07:00:02.285588 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-17 07:00:02.285599 | orchestrator | Tuesday 17 February 2026 06:59:21 +0000 (0:00:04.933) 1:12:37.170 ****** 
2026-02-17 07:00:02.285609 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-17 07:00:02.285621 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-17 07:00:02.285645 | orchestrator | 2026-02-17 07:00:02.285655 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-17 07:00:02.285665 | orchestrator | Tuesday 17 February 2026 06:59:24 +0000 (0:00:03.026) 1:12:40.197 ****** 2026-02-17 07:00:02.285676 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-17 07:00:02.285687 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:02.285697 | orchestrator | 2026-02-17 07:00:02.285708 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-17 07:00:02.285719 | orchestrator | Tuesday 17 February 2026 06:59:26 +0000 (0:00:01.651) 1:12:41.848 ****** 2026-02-17 07:00:02.285729 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-02-17 07:00:02.285739 | orchestrator | 2026-02-17 07:00:02.285750 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-17 07:00:02.285760 | orchestrator | Tuesday 17 February 2026 06:59:27 +0000 (0:00:01.209) 1:12:43.057 ****** 2026-02-17 07:00:02.285771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285834 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:02.285844 | orchestrator | 2026-02-17 07:00:02.285855 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-17 07:00:02.285865 | orchestrator | Tuesday 17 February 2026 06:59:29 +0000 (0:00:01.675) 1:12:44.732 ****** 2026-02-17 07:00:02.285876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-17 07:00:02.285927 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:02.285938 | orchestrator | 2026-02-17 07:00:02.285949 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-17 07:00:02.285959 | orchestrator | Tuesday 17 February 2026 06:59:31 +0000 (0:00:01.578) 1:12:46.311 ****** 2026-02-17 07:00:02.285969 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 07:00:02.285980 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 07:00:02.285991 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 07:00:02.286002 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 07:00:02.286088 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-17 07:00:02.286101 | orchestrator | 2026-02-17 07:00:02.286111 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-17 07:00:02.286121 | orchestrator | Tuesday 17 February 2026 07:00:01 +0000 (0:00:30.299) 1:13:16.611 ****** 2026-02-17 07:00:02.286132 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:02.286142 | orchestrator | 2026-02-17 07:00:02.286153 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-17 07:00:02.286174 | orchestrator | Tuesday 17 February 2026 07:00:02 +0000 (0:00:00.930) 1:13:17.542 ****** 2026-02-17 07:00:55.414009 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:55.414182 | orchestrator | 2026-02-17 07:00:55.414239 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-17 07:00:55.414253 | orchestrator | Tuesday 17 February 2026 07:00:03 +0000 (0:00:00.810) 1:13:18.352 ****** 2026-02-17 07:00:55.414265 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-02-17 07:00:55.414276 | orchestrator | 2026-02-17 07:00:55.414288 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-17 07:00:55.414299 | orchestrator | Tuesday 17 February 2026 07:00:04 +0000 (0:00:01.309) 1:13:19.662 ****** 2026-02-17 07:00:55.414310 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-02-17 07:00:55.414321 | orchestrator | 2026-02-17 07:00:55.414332 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-17 07:00:55.414344 | orchestrator | Tuesday 17 February 2026 07:00:05 +0000 (0:00:01.145) 1:13:20.808 ****** 2026-02-17 07:00:55.414355 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.414367 | orchestrator | 2026-02-17 07:00:55.414378 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-17 07:00:55.414389 | orchestrator | Tuesday 17 February 2026 07:00:07 +0000 (0:00:02.044) 1:13:22.852 ****** 2026-02-17 07:00:55.414400 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.414411 | orchestrator | 2026-02-17 07:00:55.414422 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-17 07:00:55.414434 | orchestrator | Tuesday 17 February 2026 07:00:09 +0000 (0:00:01.966) 1:13:24.819 ****** 2026-02-17 07:00:55.414445 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.414457 | orchestrator | 2026-02-17 07:00:55.414475 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-17 07:00:55.414493 | orchestrator | Tuesday 17 February 2026 07:00:11 +0000 (0:00:02.179) 1:13:26.999 ****** 2026-02-17 07:00:55.414531 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-17 07:00:55.414553 | orchestrator | 2026-02-17 07:00:55.414571 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-02-17 07:00:55.414590 | 
orchestrator | skipping: no hosts matched 2026-02-17 07:00:55.414608 | orchestrator | 2026-02-17 07:00:55.414627 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-02-17 07:00:55.414647 | orchestrator | skipping: no hosts matched 2026-02-17 07:00:55.414667 | orchestrator | 2026-02-17 07:00:55.414688 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-02-17 07:00:55.414707 | orchestrator | skipping: no hosts matched 2026-02-17 07:00:55.414727 | orchestrator | 2026-02-17 07:00:55.414741 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-02-17 07:00:55.414754 | orchestrator | 2026-02-17 07:00:55.414771 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-02-17 07:00:55.414791 | orchestrator | Tuesday 17 February 2026 07:00:15 +0000 (0:00:04.202) 1:13:31.201 ****** 2026-02-17 07:00:55.414810 | orchestrator | changed: [testbed-node-0] 2026-02-17 07:00:55.414828 | orchestrator | changed: [testbed-node-1] 2026-02-17 07:00:55.414847 | orchestrator | changed: [testbed-node-2] 2026-02-17 07:00:55.414895 | orchestrator | changed: [testbed-node-3] 2026-02-17 07:00:55.414913 | orchestrator | changed: [testbed-node-4] 2026-02-17 07:00:55.414930 | orchestrator | changed: [testbed-node-5] 2026-02-17 07:00:55.414976 | orchestrator | 2026-02-17 07:00:55.414994 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-02-17 07:00:55.415013 | orchestrator | Tuesday 17 February 2026 07:00:18 +0000 (0:00:02.983) 1:13:34.184 ****** 2026-02-17 07:00:55.415031 | orchestrator | changed: [testbed-node-3] 2026-02-17 07:00:55.415051 | orchestrator | changed: [testbed-node-1] 2026-02-17 07:00:55.415070 | orchestrator | changed: [testbed-node-2] 2026-02-17 07:00:55.415089 | orchestrator | changed: [testbed-node-4] 2026-02-17 07:00:55.415108 | 
orchestrator | changed: [testbed-node-5] 2026-02-17 07:00:55.415126 | orchestrator | changed: [testbed-node-0] 2026-02-17 07:00:55.415146 | orchestrator | 2026-02-17 07:00:55.415164 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 07:00:55.415223 | orchestrator | Tuesday 17 February 2026 07:00:22 +0000 (0:00:03.687) 1:13:37.871 ****** 2026-02-17 07:00:55.415246 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:00:55.415266 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:00:55.415285 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:00:55.415303 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:00:55.415321 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:00:55.415339 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.415357 | orchestrator | 2026-02-17 07:00:55.415376 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 07:00:55.415390 | orchestrator | Tuesday 17 February 2026 07:00:25 +0000 (0:00:02.719) 1:13:40.591 ****** 2026-02-17 07:00:55.415401 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:00:55.415412 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:00:55.415423 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:00:55.415434 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:00:55.415445 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:00:55.415456 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.415467 | orchestrator | 2026-02-17 07:00:55.415478 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-17 07:00:55.415489 | orchestrator | Tuesday 17 February 2026 07:00:27 +0000 (0:00:02.205) 1:13:42.797 ****** 2026-02-17 07:00:55.415501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 07:00:55.415514 | 
orchestrator | 2026-02-17 07:00:55.415525 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-17 07:00:55.415536 | orchestrator | Tuesday 17 February 2026 07:00:29 +0000 (0:00:02.262) 1:13:45.059 ****** 2026-02-17 07:00:55.415547 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 07:00:55.415558 | orchestrator | 2026-02-17 07:00:55.415593 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-17 07:00:55.415605 | orchestrator | Tuesday 17 February 2026 07:00:31 +0000 (0:00:02.193) 1:13:47.253 ****** 2026-02-17 07:00:55.415616 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:00:55.415627 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:00:55.415639 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:00:55.415650 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:00:55.415661 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:55.415671 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:00:55.415683 | orchestrator | 2026-02-17 07:00:55.415694 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-17 07:00:55.415705 | orchestrator | Tuesday 17 February 2026 07:00:34 +0000 (0:00:02.023) 1:13:49.277 ****** 2026-02-17 07:00:55.415715 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:00:55.415727 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:00:55.415738 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:00:55.415761 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:00:55.415773 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:00:55.415783 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.415794 | orchestrator | 2026-02-17 07:00:55.415805 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-17 07:00:55.415816 | orchestrator | Tuesday 17 February 2026 07:00:36 +0000 (0:00:02.206) 1:13:51.483 ****** 2026-02-17 07:00:55.415827 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:00:55.415838 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:00:55.415849 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:00:55.415860 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:00:55.415871 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:00:55.415882 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.415893 | orchestrator | 2026-02-17 07:00:55.415904 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-17 07:00:55.415915 | orchestrator | Tuesday 17 February 2026 07:00:38 +0000 (0:00:02.194) 1:13:53.678 ****** 2026-02-17 07:00:55.415926 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:00:55.415937 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:00:55.415948 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:00:55.415968 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:00:55.415979 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:00:55.415990 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.416000 | orchestrator | 2026-02-17 07:00:55.416011 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-17 07:00:55.416023 | orchestrator | Tuesday 17 February 2026 07:00:40 +0000 (0:00:02.196) 1:13:55.874 ****** 2026-02-17 07:00:55.416034 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:00:55.416045 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:00:55.416056 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:00:55.416067 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:00:55.416078 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:55.416089 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:00:55.416100 | orchestrator | 
2026-02-17 07:00:55.416111 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-17 07:00:55.416122 | orchestrator | Tuesday 17 February 2026 07:00:42 +0000 (0:00:02.376) 1:13:58.251 ****** 2026-02-17 07:00:55.416133 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:00:55.416144 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:00:55.416155 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:00:55.416166 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:00:55.416177 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:00:55.416211 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:55.416223 | orchestrator | 2026-02-17 07:00:55.416234 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-17 07:00:55.416245 | orchestrator | Tuesday 17 February 2026 07:00:44 +0000 (0:00:01.780) 1:14:00.031 ****** 2026-02-17 07:00:55.416256 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:00:55.416267 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:00:55.416278 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:00:55.416289 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:00:55.416300 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:00:55.416310 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:55.416321 | orchestrator | 2026-02-17 07:00:55.416332 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-17 07:00:55.416344 | orchestrator | Tuesday 17 February 2026 07:00:46 +0000 (0:00:02.100) 1:14:02.131 ****** 2026-02-17 07:00:55.416354 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:00:55.416365 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:00:55.416376 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:00:55.416387 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:00:55.416398 | orchestrator | ok: [testbed-node-4] 
2026-02-17 07:00:55.416409 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.416420 | orchestrator | 2026-02-17 07:00:55.416431 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-17 07:00:55.416449 | orchestrator | Tuesday 17 February 2026 07:00:48 +0000 (0:00:02.121) 1:14:04.253 ****** 2026-02-17 07:00:55.416460 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:00:55.416471 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:00:55.416481 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:00:55.416492 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:00:55.416504 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:00:55.416521 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:00:55.416538 | orchestrator | 2026-02-17 07:00:55.416556 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-17 07:00:55.416574 | orchestrator | Tuesday 17 February 2026 07:00:51 +0000 (0:00:02.482) 1:14:06.735 ****** 2026-02-17 07:00:55.416592 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:00:55.416609 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:00:55.416621 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:00:55.416632 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:00:55.416642 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:00:55.416653 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:55.416665 | orchestrator | 2026-02-17 07:00:55.416676 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-17 07:00:55.416687 | orchestrator | Tuesday 17 February 2026 07:00:53 +0000 (0:00:01.776) 1:14:08.512 ****** 2026-02-17 07:00:55.416698 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:00:55.416708 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:00:55.416719 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:00:55.416730 | orchestrator | skipping: 
[testbed-node-3] 2026-02-17 07:00:55.416741 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:00:55.416753 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:00:55.416764 | orchestrator | 2026-02-17 07:00:55.416784 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-17 07:01:50.677863 | orchestrator | Tuesday 17 February 2026 07:00:55 +0000 (0:00:02.157) 1:14:10.669 ****** 2026-02-17 07:01:50.677982 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.678000 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.678012 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.678101 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:01:50.678114 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:01:50.678125 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:01:50.678136 | orchestrator | 2026-02-17 07:01:50.678148 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-17 07:01:50.678160 | orchestrator | Tuesday 17 February 2026 07:00:57 +0000 (0:00:01.751) 1:14:12.421 ****** 2026-02-17 07:01:50.678211 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.678231 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.678251 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.678270 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:01:50.678284 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:01:50.678295 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:01:50.678305 | orchestrator | 2026-02-17 07:01:50.678317 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-17 07:01:50.678328 | orchestrator | Tuesday 17 February 2026 07:00:58 +0000 (0:00:01.806) 1:14:14.227 ****** 2026-02-17 07:01:50.678339 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.678350 | orchestrator | skipping: [testbed-node-1] 2026-02-17 
07:01:50.678361 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.678372 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:01:50.678383 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:01:50.678394 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:01:50.678407 | orchestrator | 2026-02-17 07:01:50.678420 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-17 07:01:50.678433 | orchestrator | Tuesday 17 February 2026 07:01:00 +0000 (0:00:01.847) 1:14:16.075 ****** 2026-02-17 07:01:50.678445 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.678458 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.678512 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.678525 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:01:50.678538 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:01:50.678550 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:01:50.678562 | orchestrator | 2026-02-17 07:01:50.678575 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-17 07:01:50.678588 | orchestrator | Tuesday 17 February 2026 07:01:02 +0000 (0:00:01.751) 1:14:17.827 ****** 2026-02-17 07:01:50.678600 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.678613 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.678624 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.678638 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:01:50.678650 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:01:50.678662 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:01:50.678675 | orchestrator | 2026-02-17 07:01:50.678688 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-17 07:01:50.678700 | orchestrator | Tuesday 17 February 2026 07:01:04 +0000 (0:00:02.071) 1:14:19.899 ****** 2026-02-17 07:01:50.678713 | 
orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.678725 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:01:50.678739 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:01:50.678752 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:01:50.678764 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:01:50.678775 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:01:50.678786 | orchestrator | 2026-02-17 07:01:50.678797 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-17 07:01:50.678808 | orchestrator | Tuesday 17 February 2026 07:01:06 +0000 (0:00:01.775) 1:14:21.674 ****** 2026-02-17 07:01:50.678819 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.678830 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:01:50.678841 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:01:50.678852 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:01:50.678862 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:01:50.678873 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:01:50.678884 | orchestrator | 2026-02-17 07:01:50.678895 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-17 07:01:50.678906 | orchestrator | Tuesday 17 February 2026 07:01:08 +0000 (0:00:02.090) 1:14:23.765 ****** 2026-02-17 07:01:50.678917 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.678927 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:01:50.678938 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:01:50.678949 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:01:50.678967 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:01:50.678982 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:01:50.678993 | orchestrator | 2026-02-17 07:01:50.679004 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-17 07:01:50.679016 | orchestrator | Tuesday 17 February 2026 07:01:10 +0000 (0:00:02.333) 
1:14:26.099 ****** 2026-02-17 07:01:50.679027 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.679038 | orchestrator | 2026-02-17 07:01:50.679048 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-17 07:01:50.679059 | orchestrator | Tuesday 17 February 2026 07:01:13 +0000 (0:00:02.923) 1:14:29.023 ****** 2026-02-17 07:01:50.679070 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.679080 | orchestrator | 2026-02-17 07:01:50.679091 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-17 07:01:50.679102 | orchestrator | Tuesday 17 February 2026 07:01:16 +0000 (0:00:02.984) 1:14:32.008 ****** 2026-02-17 07:01:50.679113 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.679124 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:01:50.679134 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:01:50.679145 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:01:50.679156 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:01:50.679191 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:01:50.679204 | orchestrator | 2026-02-17 07:01:50.679215 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-17 07:01:50.679236 | orchestrator | Tuesday 17 February 2026 07:01:19 +0000 (0:00:02.569) 1:14:34.578 ****** 2026-02-17 07:01:50.679247 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.679257 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:01:50.679268 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:01:50.679279 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:01:50.679289 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:01:50.679300 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:01:50.679310 | orchestrator | 2026-02-17 07:01:50.679322 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-17 07:01:50.679350 | orchestrator | 
Tuesday 17 February 2026 07:01:21 +0000 (0:00:02.539) 1:14:37.117 ****** 2026-02-17 07:01:50.679363 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-17 07:01:50.679376 | orchestrator | 2026-02-17 07:01:50.679387 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-17 07:01:50.679398 | orchestrator | Tuesday 17 February 2026 07:01:24 +0000 (0:00:02.648) 1:14:39.766 ****** 2026-02-17 07:01:50.679408 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.679419 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:01:50.679430 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:01:50.679440 | orchestrator | ok: [testbed-node-3] 2026-02-17 07:01:50.679451 | orchestrator | ok: [testbed-node-4] 2026-02-17 07:01:50.679461 | orchestrator | ok: [testbed-node-5] 2026-02-17 07:01:50.679472 | orchestrator | 2026-02-17 07:01:50.679483 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-17 07:01:50.679494 | orchestrator | Tuesday 17 February 2026 07:01:27 +0000 (0:00:02.788) 1:14:42.554 ****** 2026-02-17 07:01:50.679505 | orchestrator | changed: [testbed-node-3] 2026-02-17 07:01:50.679516 | orchestrator | changed: [testbed-node-0] 2026-02-17 07:01:50.679527 | orchestrator | changed: [testbed-node-4] 2026-02-17 07:01:50.679538 | orchestrator | changed: [testbed-node-5] 2026-02-17 07:01:50.679549 | orchestrator | changed: [testbed-node-2] 2026-02-17 07:01:50.679559 | orchestrator | changed: [testbed-node-1] 2026-02-17 07:01:50.679570 | orchestrator | 2026-02-17 07:01:50.679581 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-02-17 07:01:50.679592 | orchestrator | 2026-02-17 07:01:50.679603 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-02-17 07:01:50.679614 | orchestrator | Tuesday 17 February 2026 07:01:31 +0000 (0:00:04.637) 1:14:47.192 ****** 2026-02-17 07:01:50.679625 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.679636 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:01:50.679653 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:01:50.679664 | orchestrator | 2026-02-17 07:01:50.679675 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 07:01:50.679686 | orchestrator | Tuesday 17 February 2026 07:01:33 +0000 (0:00:01.746) 1:14:48.939 ****** 2026-02-17 07:01:50.679697 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.679708 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:01:50.679719 | orchestrator | ok: [testbed-node-2] 2026-02-17 07:01:50.679730 | orchestrator | 2026-02-17 07:01:50.679741 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-17 07:01:50.679752 | orchestrator | Tuesday 17 February 2026 07:01:35 +0000 (0:00:01.826) 1:14:50.766 ****** 2026-02-17 07:01:50.679763 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:01:50.679774 | orchestrator | 2026-02-17 07:01:50.679785 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-17 07:01:50.679796 | orchestrator | Tuesday 17 February 2026 07:01:37 +0000 (0:00:02.326) 1:14:53.092 ****** 2026-02-17 07:01:50.679807 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.679818 | orchestrator | 2026-02-17 07:01:50.679829 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-02-17 07:01:50.679840 | orchestrator | 2026-02-17 07:01:50.679857 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-02-17 07:01:50.679868 | orchestrator | Tuesday 17 February 2026 07:01:40 +0000 (0:00:02.289) 1:14:55.381 ****** 2026-02-17 
07:01:50.679879 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.679890 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.679901 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.679912 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:01:50.679922 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:01:50.679933 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:01:50.679944 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:01:50.679955 | orchestrator | 2026-02-17 07:01:50.679966 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 07:01:50.679977 | orchestrator | Tuesday 17 February 2026 07:01:42 +0000 (0:00:02.071) 1:14:57.453 ****** 2026-02-17 07:01:50.679988 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.679999 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.680010 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.680020 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:01:50.680031 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:01:50.680042 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:01:50.680053 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:01:50.680063 | orchestrator | 2026-02-17 07:01:50.680074 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-17 07:01:50.680085 | orchestrator | Tuesday 17 February 2026 07:01:44 +0000 (0:00:02.586) 1:15:00.039 ****** 2026-02-17 07:01:50.680096 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.680107 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.680118 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.680129 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:01:50.680139 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:01:50.680150 | orchestrator | skipping: [testbed-node-5] 2026-02-17 
07:01:50.680161 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:01:50.680231 | orchestrator | 2026-02-17 07:01:50.680244 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-17 07:01:50.680256 | orchestrator | Tuesday 17 February 2026 07:01:47 +0000 (0:00:02.505) 1:15:02.545 ****** 2026-02-17 07:01:50.680267 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.680279 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.680291 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.680302 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:01:50.680314 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:01:50.680325 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:01:50.680336 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:01:50.680348 | orchestrator | 2026-02-17 07:01:50.680359 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-02-17 07:01:50.680371 | orchestrator | Tuesday 17 February 2026 07:01:50 +0000 (0:00:02.801) 1:15:05.347 ****** 2026-02-17 07:01:50.680383 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:01:50.680395 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:01:50.680406 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:01:50.680425 | orchestrator | skipping: [testbed-node-3] 2026-02-17 07:02:41.088839 | orchestrator | skipping: [testbed-node-4] 2026-02-17 07:02:41.088958 | orchestrator | skipping: [testbed-node-5] 2026-02-17 07:02:41.088980 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.088996 | orchestrator | 2026-02-17 07:02:41.089012 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-02-17 07:02:41.089028 | orchestrator | 2026-02-17 07:02:41.089042 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-02-17 07:02:41.089057 | 
orchestrator | Tuesday 17 February 2026 07:01:53 +0000 (0:00:03.034) 1:15:08.382 ****** 2026-02-17 07:02:41.089071 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-02-17 07:02:41.089086 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-02-17 07:02:41.089130 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-02-17 07:02:41.089202 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089221 | orchestrator | 2026-02-17 07:02:41.089236 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-17 07:02:41.089249 | orchestrator | Tuesday 17 February 2026 07:01:54 +0000 (0:00:01.194) 1:15:09.576 ****** 2026-02-17 07:02:41.089265 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089279 | orchestrator | 2026-02-17 07:02:41.089294 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-17 07:02:41.089309 | orchestrator | Tuesday 17 February 2026 07:01:55 +0000 (0:00:01.214) 1:15:10.791 ****** 2026-02-17 07:02:41.089324 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089339 | orchestrator | 2026-02-17 07:02:41.089354 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-17 07:02:41.089369 | orchestrator | Tuesday 17 February 2026 07:01:56 +0000 (0:00:01.141) 1:15:11.932 ****** 2026-02-17 07:02:41.089383 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089397 | orchestrator | 2026-02-17 07:02:41.089413 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-17 07:02:41.089445 | orchestrator | Tuesday 17 February 2026 07:01:57 +0000 (0:00:01.187) 1:15:13.120 ****** 2026-02-17 07:02:41.089462 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089476 | orchestrator | 2026-02-17 07:02:41.089492 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-02-17 07:02:41.089506 | orchestrator | Tuesday 17 February 2026 07:01:59 +0000 (0:00:01.177) 1:15:14.297 ****** 2026-02-17 07:02:41.089522 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-02-17 07:02:41.089537 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-02-17 07:02:41.089552 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089567 | orchestrator | 2026-02-17 07:02:41.089582 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-02-17 07:02:41.089597 | orchestrator | Tuesday 17 February 2026 07:02:00 +0000 (0:00:01.266) 1:15:15.564 ****** 2026-02-17 07:02:41.089612 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089627 | orchestrator | 2026-02-17 07:02:41.089643 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-02-17 07:02:41.089658 | orchestrator | Tuesday 17 February 2026 07:02:01 +0000 (0:00:01.132) 1:15:16.696 ****** 2026-02-17 07:02:41.089672 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089688 | orchestrator | 2026-02-17 07:02:41.089702 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-02-17 07:02:41.089717 | orchestrator | Tuesday 17 February 2026 07:02:02 +0000 (0:00:01.168) 1:15:17.865 ****** 2026-02-17 07:02:41.089731 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089746 | orchestrator | 2026-02-17 07:02:41.089761 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-02-17 07:02:41.089777 | orchestrator | Tuesday 17 February 2026 07:02:03 +0000 (0:00:01.141) 1:15:19.006 ****** 2026-02-17 07:02:41.089791 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-02-17 07:02:41.089806 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-02-17 07:02:41.089821 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089835 | orchestrator | 2026-02-17 07:02:41.089850 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-02-17 07:02:41.089866 | orchestrator | Tuesday 17 February 2026 07:02:04 +0000 (0:00:01.153) 1:15:20.160 ****** 2026-02-17 07:02:41.089880 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089895 | orchestrator | 2026-02-17 07:02:41.089911 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-02-17 07:02:41.089926 | orchestrator | Tuesday 17 February 2026 07:02:06 +0000 (0:00:01.111) 1:15:21.272 ****** 2026-02-17 07:02:41.089941 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.089970 | orchestrator | 2026-02-17 07:02:41.089986 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-02-17 07:02:41.090000 | orchestrator | Tuesday 17 February 2026 07:02:07 +0000 (0:00:01.095) 1:15:22.367 ****** 2026-02-17 07:02:41.090080 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.090101 | orchestrator | 2026-02-17 07:02:41.090116 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-02-17 07:02:41.090141 | orchestrator | Tuesday 17 February 2026 07:02:08 +0000 (0:00:01.182) 1:15:23.549 ****** 2026-02-17 07:02:41.090184 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:02:41.090199 | orchestrator | 2026-02-17 07:02:41.090213 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-02-17 07:02:41.090229 | orchestrator | 2026-02-17 07:02:41.090243 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-17 07:02:41.090258 | orchestrator | Tuesday 17 February 2026 07:02:10 +0000 (0:00:01.995) 1:15:25.545 ****** 2026-02-17 
07:02:41.090273 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:02:41.090287 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:02:41.090301 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:02:41.090317 | orchestrator | 2026-02-17 07:02:41.090331 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-17 07:02:41.090345 | orchestrator | Tuesday 17 February 2026 07:02:11 +0000 (0:00:01.433) 1:15:26.979 ****** 2026-02-17 07:02:41.090360 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:02:41.090374 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:02:41.090412 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:02:41.090428 | orchestrator | 2026-02-17 07:02:41.090443 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-17 07:02:41.090457 | orchestrator | Tuesday 17 February 2026 07:02:13 +0000 (0:00:01.392) 1:15:28.371 ****** 2026-02-17 07:02:41.090499 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:02:41.090517 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:02:41.090531 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:02:41.090545 | orchestrator | 2026-02-17 07:02:41.090559 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-17 07:02:41.090574 | orchestrator | Tuesday 17 February 2026 07:02:14 +0000 (0:00:01.807) 1:15:30.179 ****** 2026-02-17 07:02:41.090589 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:02:41.090604 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:02:41.090618 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:02:41.090632 | orchestrator | 2026-02-17 07:02:41.090647 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-17 07:02:41.090662 | orchestrator | Tuesday 17 February 2026 07:02:16 +0000 (0:00:01.481) 1:15:31.660 ****** 2026-02-17 
07:02:41.090676 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:02:41.090691 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:02:41.090705 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:02:41.090720 | orchestrator | 2026-02-17 07:02:41.090734 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-02-17 07:02:41.090749 | orchestrator | Tuesday 17 February 2026 07:02:17 +0000 (0:00:01.475) 1:15:33.136 ****** 2026-02-17 07:02:41.090764 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:02:41.090778 | orchestrator | skipping: [testbed-node-1] 2026-02-17 07:02:41.090793 | orchestrator | skipping: [testbed-node-2] 2026-02-17 07:02:41.090809 | orchestrator | 2026-02-17 07:02:41.090823 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-02-17 07:02:41.090847 | orchestrator | Tuesday 17 February 2026 07:02:19 +0000 (0:00:01.701) 1:15:34.837 ****** 2026-02-17 07:02:41.090863 | orchestrator | skipping: [testbed-node-0] 2026-02-17 07:02:41.090877 | orchestrator | 2026-02-17 07:02:41.090893 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-02-17 07:02:41.090909 | orchestrator | 2026-02-17 07:02:41.090923 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-17 07:02:41.090938 | orchestrator | Tuesday 17 February 2026 07:02:21 +0000 (0:00:01.517) 1:15:36.355 ****** 2026-02-17 07:02:41.090963 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:02:41.090978 | orchestrator | 2026-02-17 07:02:41.090991 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-17 07:02:41.091000 | orchestrator | Tuesday 17 February 2026 07:02:22 +0000 (0:00:01.538) 1:15:37.893 ****** 2026-02-17 07:02:41.091009 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:02:41.091018 | orchestrator | 2026-02-17 07:02:41.091027 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-02-17 07:02:41.091035 | orchestrator | Tuesday 17 February 2026 07:02:23 +0000 (0:00:01.201) 1:15:39.095 ****** 2026-02-17 07:02:41.091044 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:02:41.091053 | orchestrator | 2026-02-17 07:02:41.091063 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-02-17 07:02:41.091077 | orchestrator | Tuesday 17 February 2026 07:02:25 +0000 (0:00:01.181) 1:15:40.276 ****** 2026-02-17 07:02:41.091091 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:02:41.091105 | orchestrator | 2026-02-17 07:02:41.091120 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-02-17 07:02:41.091134 | orchestrator | Tuesday 17 February 2026 07:02:27 +0000 (0:00:02.931) 1:15:43.207 ****** 2026-02-17 07:02:41.091193 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:02:41.091205 | orchestrator | 2026-02-17 07:02:41.091214 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-02-17 07:02:41.091223 | orchestrator | Tuesday 17 February 2026 07:02:31 +0000 (0:00:03.647) 1:15:46.855 ****** 2026-02-17 07:02:41.091232 | orchestrator | changed: [testbed-node-0] 2026-02-17 07:02:41.091241 | orchestrator | 2026-02-17 07:02:41.091250 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-02-17 07:02:41.091259 | orchestrator | 2026-02-17 07:02:41.091268 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-02-17 07:02:41.091277 | orchestrator | Tuesday 17 February 2026 07:02:33 +0000 (0:00:02.206) 1:15:49.061 ****** 2026-02-17 07:02:41.091285 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:02:41.091294 | orchestrator | ok: [testbed-node-1] 2026-02-17 07:02:41.091303 | orchestrator | ok: [testbed-node-2] 2026-02-17 
07:02:41.091311 | orchestrator | 2026-02-17 07:02:41.091320 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-02-17 07:02:41.091329 | orchestrator | Tuesday 17 February 2026 07:02:35 +0000 (0:00:01.478) 1:15:50.540 ****** 2026-02-17 07:02:41.091338 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:02:41.091346 | orchestrator | 2026-02-17 07:02:41.091355 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-02-17 07:02:41.091364 | orchestrator | Tuesday 17 February 2026 07:02:37 +0000 (0:00:02.245) 1:15:52.785 ****** 2026-02-17 07:02:41.091372 | orchestrator | ok: [testbed-node-0] 2026-02-17 07:02:41.091381 | orchestrator | 2026-02-17 07:02:41.091390 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 07:02:41.091400 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-17 07:02:41.091411 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-02-17 07:02:41.091421 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0 2026-02-17 07:02:41.091430 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0 2026-02-17 07:02:41.091448 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0 2026-02-17 07:02:41.848662 | orchestrator | testbed-node-3 : ok=317  changed=20  unreachable=0 failed=0 skipped=355  rescued=0 ignored=0 2026-02-17 07:02:41.848816 | orchestrator | testbed-node-4 : ok=307  changed=17  unreachable=0 failed=0 skipped=352  rescued=0 ignored=0 2026-02-17 07:02:41.848847 | orchestrator | testbed-node-5 : ok=303  changed=17  unreachable=0 failed=0 skipped=337  rescued=0 ignored=0 2026-02-17 07:02:41.848867 | orchestrator | 2026-02-17 
07:02:41.848887 | orchestrator | 2026-02-17 07:02:41.848898 | orchestrator | 2026-02-17 07:02:41.848910 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 07:02:41.848922 | orchestrator | Tuesday 17 February 2026 07:02:41 +0000 (0:00:03.546) 1:15:56.332 ****** 2026-02-17 07:02:41.848933 | orchestrator | =============================================================================== 2026-02-17 07:02:41.848944 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 74.65s 2026-02-17 07:02:41.848955 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 73.34s 2026-02-17 07:02:41.848966 | orchestrator | Gather and delegate facts ---------------------------------------------- 32.02s 2026-02-17 07:02:41.848977 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.43s 2026-02-17 07:02:41.848988 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.75s 2026-02-17 07:02:41.849014 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.30s 2026-02-17 07:02:41.849026 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 28.70s 2026-02-17 07:02:41.849037 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 28.35s 2026-02-17 07:02:41.849048 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 27.38s 2026-02-17 07:02:41.849058 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.00s 2026-02-17 07:02:41.849069 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.88s 2026-02-17 07:02:41.849080 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 22.04s 2026-02-17 07:02:41.849090 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.37s 2026-02-17 07:02:41.849101 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.45s 2026-02-17 07:02:41.849112 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 13.74s 2026-02-17 07:02:41.849123 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.70s 2026-02-17 07:02:41.849134 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.34s 2026-02-17 07:02:41.849144 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.64s 2026-02-17 07:02:41.849190 | orchestrator | Stop ceph mon ---------------------------------------------------------- 11.58s 2026-02-17 07:02:41.849202 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.96s 2026-02-17 07:02:42.164003 | orchestrator | + osism apply cephclient 2026-02-17 07:02:44.242791 | orchestrator | 2026-02-17 07:02:44 | INFO  | Task c23391c0-83db-4a67-97d0-827b3acad4fa (cephclient) was prepared for execution. 2026-02-17 07:02:44.242920 | orchestrator | 2026-02-17 07:02:44 | INFO  | It takes a moment until task c23391c0-83db-4a67-97d0-827b3acad4fa (cephclient) has been started and output is visible here. 
2026-02-17 07:03:03.345662 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-17 07:03:03.345742 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-17 07:03:03.345756 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-17 07:03:03.345761 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-17 07:03:03.345789 | orchestrator | 2026-02-17 07:03:03.345795 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-17 07:03:03.345800 | orchestrator | 2026-02-17 07:03:03.345805 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-17 07:03:03.345810 | orchestrator | Tuesday 17 February 2026 07:02:50 +0000 (0:00:01.432) 0:00:01.433 ****** 2026-02-17 07:03:03.345815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-17 07:03:03.345822 | orchestrator | 2026-02-17 07:03:03.345827 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-17 07:03:03.345831 | orchestrator | Tuesday 17 February 2026 07:02:51 +0000 (0:00:00.941) 0:00:02.374 ****** 2026-02-17 07:03:03.345836 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-17 07:03:03.345841 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-17 07:03:03.345846 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-17 07:03:03.345851 | orchestrator | 2026-02-17 07:03:03.345856 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-17 07:03:03.345860 | orchestrator | Tuesday 17 February 2026 07:02:52 +0000 (0:00:01.745) 0:00:04.120 ****** 2026-02-17 07:03:03.345865 | orchestrator | ok: [testbed-manager] => 
(item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-17 07:03:03.345870 | orchestrator | 2026-02-17 07:03:03.345874 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-17 07:03:03.345879 | orchestrator | Tuesday 17 February 2026 07:02:54 +0000 (0:00:01.117) 0:00:05.237 ****** 2026-02-17 07:03:03.345884 | orchestrator | ok: [testbed-manager] 2026-02-17 07:03:03.345889 | orchestrator | 2026-02-17 07:03:03.345893 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-17 07:03:03.345899 | orchestrator | Tuesday 17 February 2026 07:02:55 +0000 (0:00:00.953) 0:00:06.191 ****** 2026-02-17 07:03:03.345904 | orchestrator | ok: [testbed-manager] 2026-02-17 07:03:03.345908 | orchestrator | 2026-02-17 07:03:03.345913 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-17 07:03:03.345918 | orchestrator | Tuesday 17 February 2026 07:02:55 +0000 (0:00:00.923) 0:00:07.115 ****** 2026-02-17 07:03:03.345922 | orchestrator | ok: [testbed-manager] 2026-02-17 07:03:03.345927 | orchestrator | 2026-02-17 07:03:03.345932 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-17 07:03:03.345937 | orchestrator | Tuesday 17 February 2026 07:02:57 +0000 (0:00:01.186) 0:00:08.301 ****** 2026-02-17 07:03:03.345942 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-17 07:03:03.345947 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-02-17 07:03:03.345951 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-17 07:03:03.345956 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-17 07:03:03.345961 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-17 07:03:03.345966 | orchestrator | 2026-02-17 07:03:03.345970 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] 
****************** 2026-02-17 07:03:03.345985 | orchestrator | Tuesday 17 February 2026 07:03:01 +0000 (0:00:04.103) 0:00:12.405 ****** 2026-02-17 07:03:03.345990 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-17 07:03:03.345995 | orchestrator | 2026-02-17 07:03:03.346000 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-17 07:03:03.346004 | orchestrator | Tuesday 17 February 2026 07:03:01 +0000 (0:00:00.508) 0:00:12.914 ****** 2026-02-17 07:03:03.346009 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:03:03.346049 | orchestrator | 2026-02-17 07:03:03.346055 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-17 07:03:03.346060 | orchestrator | Tuesday 17 February 2026 07:03:01 +0000 (0:00:00.157) 0:00:13.072 ****** 2026-02-17 07:03:03.346064 | orchestrator | skipping: [testbed-manager] 2026-02-17 07:03:03.346073 | orchestrator | 2026-02-17 07:03:03.346078 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-17 07:03:03.346083 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-17 07:03:03.346088 | orchestrator | 2026-02-17 07:03:03.346093 | orchestrator | 2026-02-17 07:03:03.346097 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-17 07:03:03.346102 | orchestrator | Tuesday 17 February 2026 07:03:03 +0000 (0:00:01.123) 0:00:14.196 ****** 2026-02-17 07:03:03.346107 | orchestrator | =============================================================================== 2026-02-17 07:03:03.346112 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.10s 2026-02-17 07:03:03.346116 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.75s 2026-02-17 07:03:03.346121 | orchestrator | 
osism.services.cephclient : Manage cephclient service ------------------- 1.19s 2026-02-17 07:03:03.346125 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.12s 2026-02-17 07:03:03.346130 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.12s 2026-02-17 07:03:03.346135 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2026-02-17 07:03:03.346170 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.94s 2026-02-17 07:03:03.346176 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.92s 2026-02-17 07:03:03.346181 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s 2026-02-17 07:03:03.346185 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s 2026-02-17 07:03:03.641535 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-17 07:03:03.641635 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-02-17 07:03:03.646772 | orchestrator | + set -e 2026-02-17 07:03:03.646850 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-17 07:03:03.646865 | orchestrator | ++ export INTERACTIVE=false 2026-02-17 07:03:03.646876 | orchestrator | ++ INTERACTIVE=false 2026-02-17 07:03:03.646886 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-17 07:03:03.646895 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-17 07:03:03.646906 | orchestrator | + source /opt/manager-vars.sh 2026-02-17 07:03:03.646916 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-17 07:03:03.646926 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-17 07:03:03.646935 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-17 07:03:03.646945 | orchestrator | ++ CEPH_VERSION=reef 2026-02-17 07:03:03.646955 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-17 
07:03:03.646965 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-17 07:03:03.646975 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-17 07:03:03.646985 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-17 07:03:03.646994 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-17 07:03:03.647004 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-17 07:03:03.647014 | orchestrator | ++ export ARA=false 2026-02-17 07:03:03.647024 | orchestrator | ++ ARA=false 2026-02-17 07:03:03.647034 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-17 07:03:03.647043 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-17 07:03:03.647053 | orchestrator | ++ export TEMPEST=false 2026-02-17 07:03:03.647063 | orchestrator | ++ TEMPEST=false 2026-02-17 07:03:03.647072 | orchestrator | ++ export IS_ZUUL=true 2026-02-17 07:03:03.647082 | orchestrator | ++ IS_ZUUL=true 2026-02-17 07:03:03.647092 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 07:03:03.647102 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.198 2026-02-17 07:03:03.647111 | orchestrator | ++ export EXTERNAL_API=false 2026-02-17 07:03:03.647121 | orchestrator | ++ EXTERNAL_API=false 2026-02-17 07:03:03.647130 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-17 07:03:03.647231 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-17 07:03:03.647242 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-17 07:03:03.647252 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-17 07:03:03.647262 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-17 07:03:03.647272 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-17 07:03:03.647282 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-17 07:03:03.647291 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-17 07:03:03.647301 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-17 07:03:03.647972 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-02-17 07:03:03.653917 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-17 07:03:03.653984 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-17 07:03:03.653996 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-17 07:03:03.654007 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-02-17 07:03:25.966674 | orchestrator | 2026-02-17 07:03:25 | ERROR  | Unable to get ansible vault password 2026-02-17 07:03:25.966792 | orchestrator | 2026-02-17 07:03:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-17 07:03:25.966811 | orchestrator | 2026-02-17 07:03:25 | ERROR  | Dropping encrypted entries 2026-02-17 07:03:26.011847 | orchestrator | 2026-02-17 07:03:26 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-02-17 07:03:26.012505 | orchestrator | 2026-02-17 07:03:26 | INFO  | Kolla configuration check passed 2026-02-17 07:03:26.184510 | orchestrator | 2026-02-17 07:03:26 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-02-17 07:03:26.201356 | orchestrator | 2026-02-17 07:03:26 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-02-17 07:03:26.526075 | orchestrator | + osism migrate rabbitmq3to4 list 2026-02-17 07:03:47.029496 | orchestrator | 2026-02-17 07:03:47 | ERROR  | Unable to get ansible vault password 2026-02-17 07:03:47.029631 | orchestrator | 2026-02-17 07:03:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-17 07:03:47.029651 | orchestrator | 2026-02-17 07:03:47 | ERROR  | Dropping encrypted entries 2026-02-17 07:03:47.068577 | orchestrator | 2026-02-17 07:03:47 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
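The trace above shows `manager-version.sh` overriding `MANAGER_VERSION=9.5.0` with the value found in `configuration.yml` via `awk`. A minimal sketch of that lookup, assuming a file with a top-level `manager_version:` key (the path here is a stand-in, not the real testbed layout):

```shell
# Sketch of the version lookup seen in the trace: split on ': ' and
# print the value of the first line starting with 'manager_version:'.
cat > /tmp/configuration.yml <<'EOF'
manager_version: 10.0.0-rc.1
EOF
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
echo "$MANAGER_VERSION"
```

This mirrors why the job proceeds with `MANAGER_VERSION=10.0.0-rc.1` even though `/opt/manager-vars.sh` exported `9.5.0` earlier.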
2026-02-17 07:03:47.224403 | orchestrator | 2026-02-17 07:03:47 | INFO  | Found 205 classic queue(s) in vhost '/': 2026-02-17 07:03:47.224694 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-02-17 07:03:47.224712 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-02-17 07:03:47.224717 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-02-17 07:03:47.225221 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-02-17 07:03:47.225232 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - barbican.workers_fanout_1af7694776284d0ab086ce76e3f2e4d2 (vhost: /, messages: 0) 2026-02-17 07:03:47.225240 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - barbican.workers_fanout_5054f15d8dd34b30bf7505dd127500cb (vhost: /, messages: 0) 2026-02-17 07:03:47.225245 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - barbican.workers_fanout_5ec691455b5e408c8c8f4a0b1ba24169 (vhost: /, messages: 0) 2026-02-17 07:03:47.225251 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-02-17 07:03:47.225256 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central (vhost: /, messages: 1) 2026-02-17 07:03:47.225261 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.225266 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.225623 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.225632 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central_fanout_3e54501a7e534d2ba8f3c0e27699a721 (vhost: /, messages: 0) 2026-02-17 07:03:47.225659 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central_fanout_5c8214c5a2e84ee2a29bf3028de6f708 (vhost: /, messages: 0) 2026-02-17 
07:03:47.225665 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central_fanout_701f5c0d27384ecea306f020d4a9b132 (vhost: /, messages: 0) 2026-02-17 07:03:47.225852 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central_fanout_b3e67643c9284aabb1124a65ce768006 (vhost: /, messages: 0) 2026-02-17 07:03:47.225999 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central_fanout_faa1e2978300427fb1c3080485854c15 (vhost: /, messages: 0) 2026-02-17 07:03:47.226458 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - central_fanout_ff46265e26b446a0ba0cdf1ec87f4cfc (vhost: /, messages: 0) 2026-02-17 07:03:47.226530 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-02-17 07:03:47.226538 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.226548 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.226614 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.226622 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-backup_fanout_988386ab7836499b9df2cb0ee2669299 (vhost: /, messages: 0) 2026-02-17 07:03:47.226752 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-backup_fanout_a3acead58d494dcd96a3f1938659d2b8 (vhost: /, messages: 0) 2026-02-17 07:03:47.227051 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-backup_fanout_b1410bdb7f54430da31ed9e6e4650046 (vhost: /, messages: 0) 2026-02-17 07:03:47.227063 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-02-17 07:03:47.227399 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.227410 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.227583 | orchestrator | 2026-02-17 
07:03:47 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.227596 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-scheduler_fanout_794b714af8cb4d4c9b844999f89458c5 (vhost: /, messages: 0) 2026-02-17 07:03:47.227889 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-scheduler_fanout_bf628803ef7342619c699e95ce7a0c0f (vhost: /, messages: 0) 2026-02-17 07:03:47.228292 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-scheduler_fanout_c00b374999fe4e719ef9159fd6b94a0c (vhost: /, messages: 0) 2026-02-17 07:03:47.228311 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-02-17 07:03:47.228316 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-02-17 07:03:47.228322 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.228327 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_d0356ef2889b4f70b8a857b9855ad5fa (vhost: /, messages: 0) 2026-02-17 07:03:47.228334 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-02-17 07:03:47.229297 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.229310 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_f03502395b124418a2352d7d4f918dee (vhost: /, messages: 0) 2026-02-17 07:03:47.229328 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-02-17 07:03:47.229334 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.229339 | orchestrator | 2026-02-17 07:03:47 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_b66d331c81a245ea93ce72b75e510630 (vhost: /, messages: 0) 2026-02-17 07:03:47.229531 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume_fanout_5d2f7d296e404d2ab1704d61fd7dca18 (vhost: /, messages: 0) 2026-02-17 07:03:47.229545 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume_fanout_763bfb70fcf04480b757de64ef37730c (vhost: /, messages: 0) 2026-02-17 07:03:47.229551 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - cinder-volume_fanout_8c770fd2efdd411da611af961c914d85 (vhost: /, messages: 0) 2026-02-17 07:03:47.229556 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-17 07:03:47.229561 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-17 07:03:47.229600 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-17 07:03:47.229609 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-17 07:03:47.229614 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - compute_fanout_3e596eb51305459e8cc77635a12103fe (vhost: /, messages: 0) 2026-02-17 07:03:47.230064 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - compute_fanout_b9aecd1f3fad46b9afe4c14c54e62d16 (vhost: /, messages: 0) 2026-02-17 07:03:47.230075 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - compute_fanout_ba4f9570df424577bccda68e09540ac1 (vhost: /, messages: 0) 2026-02-17 07:03:47.230080 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-17 07:03:47.230086 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.230091 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.230204 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-02-17 07:03:47.230416 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - conductor_fanout_04e6e2987c3c493990aa0fe923b2297a (vhost: /, messages: 0) 2026-02-17 07:03:47.230631 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - conductor_fanout_10518ce4dea045bba3cd4a8837704dd3 (vhost: /, messages: 0) 2026-02-17 07:03:47.231309 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - conductor_fanout_5c5efbcc11264b2ba5025db13f8ec3a6 (vhost: /, messages: 0) 2026-02-17 07:03:47.231321 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - conductor_fanout_e882b4bb5fb9418a8d94b0d94d27b5a0 (vhost: /, messages: 0) 2026-02-17 07:03:47.231336 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - event.sample (vhost: /, messages: 9) 2026-02-17 07:03:47.231500 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-02-17 07:03:47.231509 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor.l7ej7cjinwkv (vhost: /, messages: 0) 2026-02-17 07:03:47.231669 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor.ml4z6thfu5tn (vhost: /, messages: 0) 2026-02-17 07:03:47.231912 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor.nhyyj44s6pmw (vhost: /, messages: 0) 2026-02-17 07:03:47.231929 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor_fanout_0d808df9b2cd40ba83003e120292807d (vhost: /, messages: 0) 2026-02-17 07:03:47.232028 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor_fanout_184c588432b5430688b7c0c348b6ee70 (vhost: /, messages: 0) 2026-02-17 07:03:47.232036 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor_fanout_236ab3821cb94263a62b8efaa3be1061 (vhost: /, messages: 0) 2026-02-17 07:03:47.232108 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor_fanout_4303ff6fe5c74ddab3e0aa6ba48d70f0 (vhost: /, messages: 0) 2026-02-17 07:03:47.232304 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor_fanout_6b5038ae38a04002adbb401f1adfa9e3 
(vhost: /, messages: 0) 2026-02-17 07:03:47.232541 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor_fanout_c0b45c664a254435adf04d848d600edd (vhost: /, messages: 0) 2026-02-17 07:03:47.232549 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor_fanout_cfb1bd478bd64fa79d89af12414953f8 (vhost: /, messages: 0) 2026-02-17 07:03:47.232867 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - magnum-conductor_fanout_e74673cf29684782b159f56c03c6b458 (vhost: /, messages: 0) 2026-02-17 07:03:47.232876 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-02-17 07:03:47.232882 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.233002 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.233161 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.233947 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-data_fanout_65643937997c4130ab111c866f0c350e (vhost: /, messages: 0) 2026-02-17 07:03:47.234080 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-data_fanout_940070b9d55b4dd3b5b31333904e7fc9 (vhost: /, messages: 0) 2026-02-17 07:03:47.234099 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-data_fanout_c68e9d76c1a742209c8b1608c1c0665f (vhost: /, messages: 0) 2026-02-17 07:03:47.234122 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-scheduler (vhost: /, messages: 0) 2026-02-17 07:03:47.234400 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.234734 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.234909 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.235121 | 
orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-scheduler_fanout_24c3d61a0a2941daae371766a8ca9124 (vhost: /, messages: 0) 2026-02-17 07:03:47.235290 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-scheduler_fanout_81976201c6694314be09a1498722ec16 (vhost: /, messages: 0) 2026-02-17 07:03:47.235477 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-scheduler_fanout_b53c0fbb0ab94bf0a746735a3f9404b9 (vhost: /, messages: 0) 2026-02-17 07:03:47.235714 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-02-17 07:03:47.235892 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-02-17 07:03:47.236047 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-02-17 07:03:47.236555 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-02-17 07:03:47.236598 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-share_fanout_4e7736a65b4648e29615a6fb17e91608 (vhost: /, messages: 0) 2026-02-17 07:03:47.236615 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-share_fanout_548c12f7d3844fecad9a49be68d171f4 (vhost: /, messages: 0) 2026-02-17 07:03:47.236803 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - manila-share_fanout_96b49356f1ca4d889aab1ffb6c950caf (vhost: /, messages: 0) 2026-02-17 07:03:47.236892 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - notifications.audit (vhost: /, messages: 0) 2026-02-17 07:03:47.237058 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-02-17 07:03:47.237285 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-02-17 07:03:47.237516 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-02-17 07:03:47.237625 | orchestrator | 2026-02-17 
07:03:47 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-02-17 07:03:47.237884 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-02-17 07:03:47.238122 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-02-17 07:03:47.238210 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-02-17 07:03:47.238449 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.238919 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.238930 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.238935 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - octavia_provisioning_v2_fanout_114bfac1cb35450ab7098a208926d19c (vhost: /, messages: 0) 2026-02-17 07:03:47.239012 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - octavia_provisioning_v2_fanout_606c6f725abc447da8db0effc5e51b47 (vhost: /, messages: 0) 2026-02-17 07:03:47.239182 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - octavia_provisioning_v2_fanout_7dab66e20d3b44c7955e085e3106d52a (vhost: /, messages: 0) 2026-02-17 07:03:47.239383 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer (vhost: /, messages: 0) 2026-02-17 07:03:47.239393 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.239470 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.239899 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.239908 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer_fanout_1fc1a6999f2b42e4a1edd895ea0004af (vhost: /, messages: 0) 2026-02-17 
07:03:47.240021 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer_fanout_4acb2258d612446cbe2b4b2de97161a2 (vhost: /, messages: 0) 2026-02-17 07:03:47.240265 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer_fanout_95615a1efa934de6a22e179e43ee0792 (vhost: /, messages: 0) 2026-02-17 07:03:47.240502 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer_fanout_9bdc96f21fc84d6d9c39a3ea24758ac8 (vhost: /, messages: 0) 2026-02-17 07:03:47.240511 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer_fanout_c079f685f4784b2bb4a0484a7d872a82 (vhost: /, messages: 0) 2026-02-17 07:03:47.240597 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - producer_fanout_ee6c614b8b1a4e5b9fe0707b494a4ff6 (vhost: /, messages: 0) 2026-02-17 07:03:47.240915 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-02-17 07:03:47.240925 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.241193 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.241267 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.241692 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_4d0d7d8ced3e4ccea7eb9a0ba82c27ec (vhost: /, messages: 0) 2026-02-17 07:03:47.241720 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_578dcc32e5434f749b9bb83695ebeba3 (vhost: /, messages: 0) 2026-02-17 07:03:47.241825 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_834ddf12a2924c279f50a3bc67b10743 (vhost: /, messages: 0) 2026-02-17 07:03:47.241929 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_aac3a8918e76452d989559827094a395 (vhost: /, messages: 0) 2026-02-17 07:03:47.242004 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_b75d80ce3ec843b2930b5dacbc89c12d (vhost: /, messages: 0) 2026-02-17 
07:03:47.242076 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_c423e34b414d4d6a8cd399bb304e0861 (vhost: /, messages: 0) 2026-02-17 07:03:47.242300 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_d26c93208ab84ac4baa48c47e1c2cd09 (vhost: /, messages: 0) 2026-02-17 07:03:47.242341 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_dc8f8911f2144bde9fccc9319bfe8221 (vhost: /, messages: 0) 2026-02-17 07:03:47.242493 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-plugin_fanout_f8d37e35cb0940b1aa673d9d136c62e6 (vhost: /, messages: 0) 2026-02-17 07:03:47.242567 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-02-17 07:03:47.242665 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.242773 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.243081 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.243090 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_09882709622a416ca6c8b3b86d3a899f (vhost: /, messages: 0) 2026-02-17 07:03:47.243234 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_100d69b87ae443978d3e088eb1453e50 (vhost: /, messages: 0) 2026-02-17 07:03:47.243450 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_1a74196c9faa4f22b37c4e2efba12b0a (vhost: /, messages: 0) 2026-02-17 07:03:47.243459 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_1ca9064e34b143f49055daa03289c6b1 (vhost: /, messages: 0) 2026-02-17 07:03:47.243524 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_1e47e5cc03f24b00862bf5f421e02657 (vhost: /, messages: 0) 2026-02-17 07:03:47.243732 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - 
q-reports-plugin_fanout_3e29231f93304fbabe29aaa52d8979f0 (vhost: /, messages: 0) 2026-02-17 07:03:47.243913 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_55ebe56f41624957be54c7195dafe967 (vhost: /, messages: 0) 2026-02-17 07:03:47.243989 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_55fdab0cdb4c4cacba226759bc831142 (vhost: /, messages: 0) 2026-02-17 07:03:47.243997 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_621e2d73c130467b9fce0a4ac919aeee (vhost: /, messages: 0) 2026-02-17 07:03:47.244207 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_630d0cc5cb334ef5a668f5996ef00cac (vhost: /, messages: 0) 2026-02-17 07:03:47.244216 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_6e3d3851c8b746f187933e276911c91a (vhost: /, messages: 0) 2026-02-17 07:03:47.244308 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_8d94bf2ec7d64903925bb688c2eda152 (vhost: /, messages: 0) 2026-02-17 07:03:47.244433 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_8f6924abbb5b4fb7b97127de07ee6076 (vhost: /, messages: 0) 2026-02-17 07:03:47.244834 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_ae4671eb6d7f4f05972483fa5f4d8e47 (vhost: /, messages: 0) 2026-02-17 07:03:47.244852 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_af9f6319c4164655bd4688ec1392f4e5 (vhost: /, messages: 0) 2026-02-17 07:03:47.244974 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_b39fd5003dba4a469e601ffaeedf2a2d (vhost: /, messages: 0) 2026-02-17 07:03:47.245072 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_bb917f5dd2ed4be6ab89f762b7b98730 (vhost: /, messages: 0) 2026-02-17 07:03:47.245143 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-reports-plugin_fanout_cfcd3d195e8c4b649ace4d8c46430188 (vhost: /, messages: 0) 2026-02-17 
07:03:47.245297 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-02-17 07:03:47.245383 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-02-17 07:03:47.245459 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-02-17 07:03:47.245544 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-02-17 07:03:47.245802 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions_fanout_42c07f2f7e69478cb0c1226d3f9d5f1e (vhost: /, messages: 0) 2026-02-17 07:03:47.245860 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions_fanout_54365cedfe944fcb987cad0147cf424d (vhost: /, messages: 0) 2026-02-17 07:03:47.245904 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions_fanout_6bd7526e29d8413c9eb27c6227fb7f51 (vhost: /, messages: 0) 2026-02-17 07:03:47.246005 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions_fanout_9fefb2b9c63142a8afc29c0680c72c4f (vhost: /, messages: 0) 2026-02-17 07:03:47.246092 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions_fanout_b87be30ad1a74de6983d20f3a1aee802 (vhost: /, messages: 0) 2026-02-17 07:03:47.246293 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions_fanout_c890de3fa5394d19b5bdc8f2af8a733e (vhost: /, messages: 0) 2026-02-17 07:03:47.246303 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions_fanout_d5478789627749139e8ac54632d1fd44 (vhost: /, messages: 0) 2026-02-17 07:03:47.246538 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - q-server-resource-versions_fanout_da2aa285d5354c0fa1bdb45d997e1c16 (vhost: /, messages: 0) 2026-02-17 07:03:47.246547 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - 
q-server-resource-versions_fanout_f32762797aab49179d358cd5d2061ca5 (vhost: /, messages: 0) 2026-02-17 07:03:47.246686 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_00505df544384072a4b86a37723496b0 (vhost: /, messages: 0) 2026-02-17 07:03:47.246807 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_06e2c7f07485449cbdaeee2695ccd805 (vhost: /, messages: 0) 2026-02-17 07:03:47.246864 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_13a785b91e144483b7c9aa914888ba0e (vhost: /, messages: 0) 2026-02-17 07:03:47.246974 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_1848a60be01a4510a74db7b8ccf0d7b2 (vhost: /, messages: 0) 2026-02-17 07:03:47.247027 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_257a59c604524a0dab47ff7389f157d3 (vhost: /, messages: 1) 2026-02-17 07:03:47.247089 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_2eb64dab6d2b45e7a9bb9b55b4f971e8 (vhost: /, messages: 0) 2026-02-17 07:03:47.247178 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_38f03ddaa68f40a9b37f0e8234f68299 (vhost: /, messages: 0) 2026-02-17 07:03:47.247280 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_47e1f0ab6e6146829163fcbf10a6f670 (vhost: /, messages: 0) 2026-02-17 07:03:47.247556 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_485012c590ae482eaa696f1524e9673f (vhost: /, messages: 0) 2026-02-17 07:03:47.247773 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_8239286246a04ceb942f3cff148b05b8 (vhost: /, messages: 2) 2026-02-17 07:03:47.247782 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_8269ae0c22164d558c20ca62dbaceb54 (vhost: /, messages: 0) 2026-02-17 07:03:47.247787 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_9e06a1a12377423fade6c1dbc0879d7d (vhost: /, messages: 0) 2026-02-17 07:03:47.247942 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_a7592d2b62454239ba7def06e9da782a (vhost: /, messages: 0) 2026-02-17 07:03:47.247958 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - 
reply_aab3dd270bf349cfa90613c2ba8acd53 (vhost: /, messages: 0)
2026-02-17 07:03:47.248210 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_b3dc7777b69a473bbbd033f577e69a39 (vhost: /, messages: 1)
2026-02-17 07:03:47.248220 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_c1e8cbec093d4cbd8132a50e843a0e63 (vhost: /, messages: 0)
2026-02-17 07:03:47.248226 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_c985dbe4ea9e41e79fe7f248d4a317f8 (vhost: /, messages: 0)
2026-02-17 07:03:47.248996 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_d344ff00abc6483f8b506d2b243eacff (vhost: /, messages: 0)
2026-02-17 07:03:47.249056 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - reply_e5a47afff68c422caaa143bdc81e60c2 (vhost: /, messages: 0)
2026-02-17 07:03:47.249064 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-02-17 07:03:47.249071 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-17 07:03:47.249083 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-17 07:03:47.249089 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-17 07:03:47.249123 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler_fanout_0d53971a760249c79ad8ccc339e527a6 (vhost: /, messages: 0)
2026-02-17 07:03:47.249159 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler_fanout_1c093e7fb4274f2e91dc7f8f51948682 (vhost: /, messages: 0)
2026-02-17 07:03:47.249169 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler_fanout_2cabfd88a12c45d180da74fa422cb954 (vhost: /, messages: 0)
2026-02-17 07:03:47.249175 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler_fanout_93c5d210fd2e4e82888c83869b13c7e6 (vhost: /, messages: 0)
2026-02-17 07:03:47.249370 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler_fanout_bf4015d588f3457bb45902727e64bd31 (vhost: /, messages: 0)
2026-02-17 07:03:47.249437 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - scheduler_fanout_f2c358ebceb741d88a27956d7002d582 (vhost: /, messages: 0)
2026-02-17 07:03:47.249458 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-17 07:03:47.249715 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-17 07:03:47.249901 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-17 07:03:47.250118 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-17 07:03:47.250147 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker_fanout_10217915fddd42478aebafab771e82fb (vhost: /, messages: 0)
2026-02-17 07:03:47.250213 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker_fanout_31802116969c4d8facc29cb63437e131 (vhost: /, messages: 0)
2026-02-17 07:03:47.250410 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker_fanout_71ac67e518334aa1a2d422b2f846fa3f (vhost: /, messages: 0)
2026-02-17 07:03:47.250419 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker_fanout_b4d9b7c2f5754122a87c08eee58bcff9 (vhost: /, messages: 0)
2026-02-17 07:03:47.250478 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker_fanout_c2af6a855bd74bca884f66d155b13c24 (vhost: /, messages: 0)
2026-02-17 07:03:47.250944 | orchestrator | 2026-02-17 07:03:47 | INFO  |  - worker_fanout_d6b4a1c7136049109d2e1195095665e9 (vhost: /, messages: 0)
2026-02-17 07:03:47.561930 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-17 07:03:49.617777 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-17 07:03:49.617875 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-02-17 07:03:49.617892 | orchestrator |                                   [--vhost VHOST]
2026-02-17 07:03:49.617906 | orchestrator |                                   [{list,delete,prepare,check}]
2026-02-17 07:03:49.617919 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-17 07:03:49.617933 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-17 07:03:50.369702 | orchestrator | ERROR
2026-02-17 07:03:50.369912 | orchestrator | {
2026-02-17 07:03:50.369950 | orchestrator |   "delta": "2:04:48.825301",
2026-02-17 07:03:50.369973 | orchestrator |   "end": "2026-02-17 07:03:49.926369",
2026-02-17 07:03:50.369995 | orchestrator |   "msg": "non-zero return code",
2026-02-17 07:03:50.370015 | orchestrator |   "rc": 2,
2026-02-17 07:03:50.370034 | orchestrator |   "start": "2026-02-17 04:59:01.101068"
2026-02-17 07:03:50.370052 | orchestrator | } failure
2026-02-17 07:03:50.660043 |
2026-02-17 07:03:50.660174 | PLAY RECAP
2026-02-17 07:03:50.660239 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-17 07:03:50.660271 |
2026-02-17 07:03:50.893547 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-17 07:03:50.896051 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-17 07:03:51.648740 |
2026-02-17 07:03:51.648912 | PLAY [Post output play]
2026-02-17 07:03:51.666996 |
2026-02-17 07:03:51.667133 | LOOP [stage-output : Register sources]
2026-02-17 07:03:51.739046 |
2026-02-17 07:03:51.739368 | TASK [stage-output : Check sudo]
2026-02-17 07:03:52.596731 | orchestrator | sudo: a password is required
2026-02-17 07:03:52.780617 | orchestrator | ok: Runtime: 0:00:00.015380
2026-02-17 07:03:52.794273 |
2026-02-17 07:03:52.794492 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-17 07:03:52.830215 |
2026-02-17 07:03:52.830474 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-17 07:03:52.898916 | orchestrator | ok
2026-02-17 07:03:52.908259 |
2026-02-17 07:03:52.908423 | LOOP [stage-output : Ensure target folders exist]
2026-02-17 07:03:53.367000 | orchestrator | ok: "docs"
2026-02-17 07:03:53.367438 |
2026-02-17 07:03:53.666300 | orchestrator | ok: "artifacts"
2026-02-17 07:03:53.998418 | orchestrator | ok: "logs"
2026-02-17 07:03:54.018240 |
2026-02-17 07:03:54.018472 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-17 07:03:54.056053 |
2026-02-17 07:03:54.056323 | TASK [stage-output : Make all log files readable]
2026-02-17 07:03:54.361163 | orchestrator | ok
2026-02-17 07:03:54.370519 |
2026-02-17 07:03:54.370649 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-17 07:03:54.406177 | orchestrator | skipping: Conditional result was False
2026-02-17 07:03:54.423711 |
2026-02-17 07:03:54.423885 | TASK [stage-output : Discover log files for compression]
2026-02-17 07:03:54.448962 | orchestrator | skipping: Conditional result was False
2026-02-17 07:03:54.463180 |
2026-02-17 07:03:54.463351 | LOOP [stage-output : Archive everything from logs]
2026-02-17 07:03:54.507670 |
2026-02-17 07:03:54.507855 | PLAY [Post cleanup play]
2026-02-17 07:03:54.518862 |
2026-02-17 07:03:54.519017 | TASK [Set cloud fact (Zuul deployment)]
2026-02-17 07:03:54.574767 | orchestrator | ok
2026-02-17 07:03:54.588450 |
2026-02-17 07:03:54.588609 | TASK [Set cloud fact (local deployment)]
2026-02-17 07:03:54.624873 | orchestrator | skipping: Conditional result was False
2026-02-17 07:03:54.637292 |
2026-02-17 07:03:54.637458 | TASK [Clean the cloud environment]
2026-02-17 07:03:55.257639 | orchestrator | 2026-02-17 07:03:55 - clean up servers
2026-02-17 07:03:55.988054 | orchestrator | 2026-02-17 07:03:55 - testbed-manager
2026-02-17 07:03:56.067838 | orchestrator | 2026-02-17 07:03:56 - testbed-node-1
2026-02-17 07:03:56.162104 | orchestrator | 2026-02-17 07:03:56 - testbed-node-5
2026-02-17 07:03:56.253833 | orchestrator | 2026-02-17 07:03:56 - testbed-node-3
2026-02-17 07:03:56.348181 | orchestrator | 2026-02-17 07:03:56 - testbed-node-0
2026-02-17 07:03:56.440090 | orchestrator | 2026-02-17 07:03:56 - testbed-node-4
2026-02-17 07:03:56.531085 | orchestrator | 2026-02-17 07:03:56 - testbed-node-2
2026-02-17 07:03:56.617542 | orchestrator | 2026-02-17 07:03:56 - clean up keypairs
2026-02-17 07:03:56.636898 | orchestrator | 2026-02-17 07:03:56 - testbed
2026-02-17 07:03:56.662930 | orchestrator | 2026-02-17 07:03:56 - wait for servers to be gone
2026-02-17 07:04:07.598393 | orchestrator | 2026-02-17 07:04:07 - clean up ports
2026-02-17 07:04:07.790181 | orchestrator | 2026-02-17 07:04:07 - 3e6ebf1e-d7ec-4c5b-9397-f0125312d223
2026-02-17 07:04:08.044285 | orchestrator | 2026-02-17 07:04:08 - 45f277ca-7c07-4bd4-b8af-94e234f06e1d
2026-02-17 07:04:08.311460 | orchestrator | 2026-02-17 07:04:08 - 719c041d-5be7-441e-aad7-bde693c77e48
2026-02-17 07:04:08.557017 | orchestrator | 2026-02-17 07:04:08 - 8cf924c8-726f-419d-85cd-447215e794ce
2026-02-17 07:04:08.939204 | orchestrator | 2026-02-17 07:04:08 - af89dfba-3e91-4573-9160-a604b2f03ae7
2026-02-17 07:04:09.162404 | orchestrator | 2026-02-17 07:04:09 - b0e534a3-1926-4a67-9735-234a2bc1559e
2026-02-17 07:04:09.457886 | orchestrator | 2026-02-17 07:04:09 - ccc3206f-8554-4d4b-9a88-b09be47e7db3
2026-02-17 07:04:09.709634 | orchestrator | 2026-02-17 07:04:09 - clean up volumes
2026-02-17 07:04:09.852185 | orchestrator | 2026-02-17 07:04:09 - testbed-volume-2-node-base
2026-02-17 07:04:09.894071 | orchestrator | 2026-02-17 07:04:09 - testbed-volume-1-node-base
2026-02-17 07:04:09.934825 | orchestrator | 2026-02-17 07:04:09 - testbed-volume-4-node-base
2026-02-17 07:04:09.974552 | orchestrator | 2026-02-17 07:04:09 - testbed-volume-0-node-base
2026-02-17 07:04:10.019964 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-3-node-base
2026-02-17 07:04:10.063156 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-5-node-base
2026-02-17 07:04:10.108385 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-manager-base
2026-02-17 07:04:10.151662 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-6-node-3
2026-02-17 07:04:10.195860 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-4-node-4
2026-02-17 07:04:10.239293 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-1-node-4
2026-02-17 07:04:10.285193 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-2-node-5
2026-02-17 07:04:10.326545 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-0-node-3
2026-02-17 07:04:10.371157 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-8-node-5
2026-02-17 07:04:10.413447 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-3-node-3
2026-02-17 07:04:10.454489 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-5-node-5
2026-02-17 07:04:10.499285 | orchestrator | 2026-02-17 07:04:10 - testbed-volume-7-node-4
2026-02-17 07:04:10.540529 | orchestrator | 2026-02-17 07:04:10 - disconnect routers
2026-02-17 07:04:10.660718 | orchestrator | 2026-02-17 07:04:10 - testbed
2026-02-17 07:04:12.162863 | orchestrator | 2026-02-17 07:04:12 - clean up subnets
2026-02-17 07:04:12.225910 | orchestrator | 2026-02-17 07:04:12 - subnet-testbed-management
2026-02-17 07:04:12.435827 | orchestrator | 2026-02-17 07:04:12 - clean up networks
2026-02-17 07:04:12.603913 | orchestrator | 2026-02-17 07:04:12 - net-testbed-management
2026-02-17 07:04:12.893388 | orchestrator | 2026-02-17 07:04:12 - clean up security groups
2026-02-17 07:04:12.939495 | orchestrator | 2026-02-17 07:04:12 - testbed-node
2026-02-17 07:04:13.047795 | orchestrator | 2026-02-17 07:04:13 - testbed-management
2026-02-17 07:04:13.180370 | orchestrator | 2026-02-17 07:04:13 - clean up floating ips
2026-02-17 07:04:13.285061 | orchestrator | 2026-02-17 07:04:13 - 81.163.193.198
2026-02-17 07:04:13.633593 | orchestrator | 2026-02-17 07:04:13 - clean up routers
2026-02-17 07:04:13.697832 | orchestrator | 2026-02-17 07:04:13 - testbed
2026-02-17 07:04:14.706171 | orchestrator | ok: Runtime: 0:00:19.659193
2026-02-17 07:04:14.710666 |
2026-02-17 07:04:14.710925 | PLAY RECAP
2026-02-17 07:04:14.711080 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-17 07:04:14.711152 |
2026-02-17 07:04:14.842014 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-17 07:04:14.843400 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-17 07:04:15.585409 |
2026-02-17 07:04:15.585642 | PLAY [Cleanup play]
2026-02-17 07:04:15.601666 |
2026-02-17 07:04:15.601799 | TASK [Set cloud fact (Zuul deployment)]
2026-02-17 07:04:15.655475 | orchestrator | ok
2026-02-17 07:04:15.663475 |
2026-02-17 07:04:15.663608 | TASK [Set cloud fact (local deployment)]
2026-02-17 07:04:15.688930 | orchestrator | skipping: Conditional result was False
2026-02-17 07:04:15.699374 |
2026-02-17 07:04:15.699540 | TASK [Clean the cloud environment]
2026-02-17 07:04:16.828800 | orchestrator | 2026-02-17 07:04:16 - clean up servers
2026-02-17 07:04:17.288748 | orchestrator | 2026-02-17 07:04:17 - clean up keypairs
2026-02-17 07:04:17.306627 | orchestrator | 2026-02-17 07:04:17 - wait for servers to be gone
2026-02-17 07:04:17.350919 | orchestrator | 2026-02-17 07:04:17 - clean up ports
2026-02-17 07:04:17.474841 | orchestrator | 2026-02-17 07:04:17 - clean up volumes
2026-02-17 07:04:17.538509 | orchestrator | 2026-02-17 07:04:17 - disconnect routers
2026-02-17 07:04:17.561347 | orchestrator | 2026-02-17 07:04:17 - clean up subnets
2026-02-17 07:04:17.581253 | orchestrator | 2026-02-17 07:04:17 - clean up networks
2026-02-17 07:04:17.762077 | orchestrator | 2026-02-17 07:04:17 - clean up security groups
2026-02-17 07:04:17.801062 | orchestrator | 2026-02-17 07:04:17 - clean up floating ips
2026-02-17 07:04:17.833900 | orchestrator | 2026-02-17 07:04:17 - clean up routers
2026-02-17 07:04:18.240200 | orchestrator | ok: Runtime: 0:00:01.405344
2026-02-17 07:04:18.244024 |
2026-02-17 07:04:18.244175 | PLAY RECAP
2026-02-17 07:04:18.244294 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-17 07:04:18.244357 |
2026-02-17 07:04:18.379513 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-17 07:04:18.381732 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-17 07:04:19.132746 |
2026-02-17 07:04:19.132906 | PLAY [Base post-fetch]
2026-02-17 07:04:19.148481 |
2026-02-17 07:04:19.148614 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-17 07:04:19.203806 | orchestrator | skipping: Conditional result was False
2026-02-17 07:04:19.210403 |
2026-02-17 07:04:19.210587 | TASK [fetch-output : Set log path for single node]
2026-02-17 07:04:19.253378 | orchestrator | ok
2026-02-17 07:04:19.260115 |
2026-02-17 07:04:19.260228 | LOOP [fetch-output : Ensure local output dirs]
2026-02-17 07:04:19.772097 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/60dbd9ca26984ddd92da8341bdfc7b56/work/logs"
2026-02-17 07:04:20.061396 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/60dbd9ca26984ddd92da8341bdfc7b56/work/artifacts"
2026-02-17 07:04:20.344275 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/60dbd9ca26984ddd92da8341bdfc7b56/work/docs"
2026-02-17 07:04:20.366275 |
2026-02-17 07:04:20.366481 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-17 07:04:21.319642 | orchestrator | changed: .d..t...... ./
2026-02-17 07:04:21.319960 | orchestrator | changed: All items complete
2026-02-17 07:04:21.320008 |
2026-02-17 07:04:22.053203 | orchestrator | changed: .d..t...... ./
2026-02-17 07:04:22.801993 | orchestrator | changed: .d..t...... ./
2026-02-17 07:04:22.841185 |
2026-02-17 07:04:22.841402 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-17 07:04:22.881095 | orchestrator | skipping: Conditional result was False
2026-02-17 07:04:22.883570 | orchestrator | skipping: Conditional result was False
2026-02-17 07:04:22.904688 |
2026-02-17 07:04:22.904844 | PLAY RECAP
2026-02-17 07:04:22.904924 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-17 07:04:22.904962 |
2026-02-17 07:04:23.048822 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-17 07:04:23.051145 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-17 07:04:23.834294 |
2026-02-17 07:04:23.834469 | PLAY [Base post]
2026-02-17 07:04:23.848985 |
2026-02-17 07:04:23.849114 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-17 07:04:24.824088 | orchestrator | changed
2026-02-17 07:04:24.833225 |
2026-02-17 07:04:24.833363 | PLAY RECAP
2026-02-17 07:04:24.833454 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-17 07:04:24.833527 |
2026-02-17 07:04:24.977895 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-17 07:04:24.980398 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-17 07:04:25.813063 |
2026-02-17 07:04:25.813258 | PLAY [Base post-logs]
2026-02-17 07:04:25.825951 |
2026-02-17 07:04:25.826095 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-17 07:04:26.282559 | localhost | changed
2026-02-17 07:04:26.296508 |
2026-02-17 07:04:26.296673 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-17 07:04:26.323888 | localhost | ok
2026-02-17 07:04:26.328503 |
2026-02-17 07:04:26.328637 | TASK [Set zuul-log-path fact]
2026-02-17 07:04:26.345464 | localhost | ok
2026-02-17 07:04:26.356011 |
2026-02-17 07:04:26.356123 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-17 07:04:26.381537 | localhost | ok
2026-02-17 07:04:26.385769 |
2026-02-17 07:04:26.385894 | TASK [upload-logs : Create log directories]
2026-02-17 07:04:26.908333 | localhost | changed
2026-02-17 07:04:26.915930 |
2026-02-17 07:04:26.916154 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-17 07:04:27.440060 | localhost -> localhost | ok: Runtime: 0:00:00.009551
2026-02-17 07:04:27.445539 |
2026-02-17 07:04:27.445699 | TASK [upload-logs : Upload logs to log server]
2026-02-17 07:04:28.044344 | localhost | Output suppressed because no_log was given
2026-02-17 07:04:28.046956 |
2026-02-17 07:04:28.047095 | LOOP [upload-logs : Compress console log and json output]
2026-02-17 07:04:28.108040 | localhost | skipping: Conditional result was False
2026-02-17 07:04:28.114305 | localhost | skipping: Conditional result was False
2026-02-17 07:04:28.127070 |
2026-02-17 07:04:28.127260 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-17 07:04:28.174777 | localhost | skipping: Conditional result was False
2026-02-17 07:04:28.175264 |
2026-02-17 07:04:28.179207 | localhost | skipping: Conditional result was False
2026-02-17 07:04:28.190332 |
2026-02-17 07:04:28.190538 | LOOP [upload-logs : Upload console log and json output]